
NotebookLM with Steven Johnson and Raiza Martin

By Google for Developers

Summary

Key takeaways

  • **AI as a personalized collaborator**: NotebookLM's core value lies in its ability to act as a personalized AI, trained on a user's own data, enabling deeper understanding and creative exploration rather than just generic chatbot interactions. [42:04], [58:49]
  • **The Adjacent Possible in AI development**: The concept of the 'adjacent possible' describes how new foundational technologies, like AI, open up a vast, previously unimaginable space of possibilities, driving rapid innovation and product development. [19:31], [21:00]
  • **From 'crazy' to 'cool' product development**: The journey of NotebookLM, from an experimental idea to a widely used tool, highlights a transition from a 'crazy' phase of building the impossible to a 'cool' phase where the technology delivers tangible utility and delight. [26:03], [28:07]
  • **AI democratizes complex information access**: Tools like NotebookLM are making complex information accessible through conversational interfaces, transforming learning from rote memorization to dialogue-based exploration, which is a more ancient and effective method. [49:35], [50:08]
  • **Gemini's impact on NotebookLM's capabilities**: The integration of Gemini, particularly with its longer context window and multimodal capabilities, has been a significant threshold moment for NotebookLM, allowing it to 'sing' and enabling features like native citations. [24:19], [24:45]
  • **NotebookLM's origin in a 'Tools for Thought' project**: NotebookLM began as a 'Tools for Thought' project within Google Labs, inspired by Steven Johnson's long-standing interest in software that aids thinking and creativity, aiming to leverage large language models for research and writing. [07:07], [17:53]

Topics Covered

  • AI massively expanded the adjacent possible.
  • AI can now search for 'interestingness'.
  • Personalized AI is built on your own sources.
  • AI is a partner for conversational learning.
  • Soon, 'person of AI' will be an obsolete term.

Full Transcript

ASHLEY OLDACRE: Welcome to the "People of AI Podcast,"

showcasing inspiring people with interesting stories in the field

of artificial intelligence.

I'm Ashley Oldacre.

Let's jump right in.

[MUSIC PLAYING]

Hi everyone.

Ashley here.

GUS MARTINS: Hi, I'm Gus here.

ASHLEY OLDACRE: Gus, we have some really exciting guests

today who are joining us, so I'll start off

with reading their bios, and then

we'll get into their stories.

So we'll start off with Steven.

Steven Johnson is the editorial director

of NotebookLM and Google Labs.

He is also the best-selling author of 14 books,

including "Where Good Ideas Come From," "Farsighted,"

and "The Ghost Map."

He was the host and co-creator of the Emmy-winning PBS/BBC

series "How We Got to Now" and "Extra Life"

and writes the newsletter "Adjacent Possible."

His Ted Talks on the history of innovation have been viewed more

than 10 million times, and he is a contributing writer

for "The New York Times Magazine."

He lives in Brooklyn, New York and Marin County, California,

with his wife and three sons.

All right, Raiza.

Raiza Martin is a senior product manager

in Google Labs, leading NotebookLM.

As part of the founding team, she

led the initial discovery, ideation and launch

of the product, including building the team.

Previously, she worked on launching AI Test

Kitchen in Google Labs and has worked in the payments and ads

organizations in the past.

Before Google, Raiza worked exclusively at startups.

Welcome Raiza.

We're so happy to have you both here.

GUS MARTINS: Hi, everyone.

Very happy to be here.

STEVEN JOHNSON: Yeah, so are we.

GUS MARTINS: Nice. I'm very excited.

First of all, I use NotebookLM every day.

I'm a little bit shy to say that, but every day.

STEVEN JOHNSON: That's so great.

We love to hear that.

GUS MARTINS: Yes, I have some cool use cases.

But before we go there, Steven, can you

tell us a little bit more about your story?

And just to be fair, I put all documents

I could find about you on NotebookLM, and it was a lot,

so I would love to hear from you directly.

[LAUGHTER]

ASHLEY OLDACRE: Amazing.

STEVEN JOHNSON: So it's been a really interesting journey

to this product and to Google and to working with Raiza.

So most of my career, I spent, as a writer,

working on largely writing books,

and then at various points, doing podcasts and TV

shows, largely about the history of ideas and innovation.

But throughout that process, I was always

really interested in the tools that I

was using to write the books and to research the books

and to share ideas with my collaborators

when I was working on the TV shows and so on.

And so from an early age, I mean--

I think this really goes back to college for me.

In the late '80s, I got obsessed with this program that Apple

released called HyperCard.

And I had this vision of organizing all my notes

from my classes using this software.

That never really went anywhere, but I kind of

lost a semester trying to get this software

to work because I thought if I could just do this,

then I will be able to think more interesting ideas

and keep track of everything in a better way.

So for some weird, nerdy reason, I've

always had this interest in how can software

help us think more clearly or make more creative associations

and so on?

And so as I got older and as my career developed as a writer,

I developed a little bit of a kind

of side hustle writing about the tools that I was using.

And so I wrote a number of pieces

and blog posts over the years.

I started using a tool called DEVONthink,

and I used a wonderful writing tool called Scrivener.

And I would write these little posts.

Occasionally I would write for "The Times Magazine:

and "The Times Book Review," actually,

about how I was using software to help me write.

And then in a book I wrote called "Where Good ideas Come

From," I talked about that at length, which is this idea like,

OK, we have certain environments that are really conducive

to good thinking in the real world.

Maybe there are certain software environments

that are conducive to more creative thinking.

And what would those look like?

And so it'd been something that was always

kind of on the sidelines of what I was doing as I was working

on all these projects.

And then fast forward to the fall of 2020 or 2021.

I went to "The Times Magazine," and I was like,

I want to write a piece about these large language models.

This is a year before the ChatGPT moment,

but GPT-3 had been released at that point.

Google had PaLM and LaMDA, which were also very powerful models,

but they weren't publicly sharing them in any way.

But you could-- if you really tried hard,

you could get access to GPT-3, OpenAI's model.

And so I wanted to write a piece about that.

And so in October of '21, I finally

get access to this language model.

And I just had this--

I just remember sitting alone in my study for three hours

and thinking, everything is going to change.

This is such a fundamental breakthrough.

The ability of these models to converse with you

and to have some kind of knowledge about the world,

but mostly just their mastery of language.

And it seemed like that whole thing

I'd been chasing for so much of my life

about using tools with writing and researching,

like, suddenly there was a whole new set of possibilities here.

And so I wrote this piece that came out in April of 2022,

and it may have been the most controversial piece

I've ever written in my life.

There were so many people who were like, oh, Johnson

fell for the hype.

He thinks that these language models are

going to be such a big deal.

But everybody knows it's just autocomplete on steroids.

And it's not going to be useful at all.

And how sad.

And I mean, there were some people

who were influenced by it in a positive way, but it was brutal.

It was really-- the pushback online was really intense.

However, there were two people at Google,

Clay Bavor, who has since left, and Josh Woodward, who now--

they had just created Labs, Google Labs.

And Josh now runs Labs.

And they had been reading my books over the years.

And they both read the piece.

And Labs had been founded with this idea

that maybe we would do these interesting co-creations

with outsiders, and not just if you were building a music

product, don't just have a bunch of technologists

build a music product.

Why don't we bring in a musician and have them

in the room for the whole life of the product?

So they had this idea that they would

design the product development process a little differently

inside of Labs.

And so I think it was Josh who came up with the idea of, I wonder if we could get Steven Johnson to come, because he's obsessed with tools for thought?

And maybe we could get him to come part-time.

And we could use him to build some kind of language

model-based software tool for writing and thinking.

And so Clay emailed me out of the blue, proposed this.

And I said, that sounds like a really good idea.

I would be interested in doing that.

And so I showed up at Google, initially part-time,

in late July--

I think it was, like, July 25, 2022-- and I just saw--

Josh just shared the email he sent me on that day.

And it's like, OK, we're going to start this Tools for Thought

project, and we're going to pretend that it's a startup.

And I'm going to get you access to the best people and the best

new interesting-related research inside of Google,

and we're just going to go for it.

And that was the day that I met Raiza Martin.

ASHLEY OLDACRE: Wow.

STEVEN JOHNSON: And we were off to the races.

RAIZA MARTIN: That's so cool.

It's always so cool hearing Steven's side of the story,

especially because that article that he

refers to that he wrote this controversial piece, that

was the same article that I read before I'd even met Steven that

got me to join Labs.

I had heard about-- of course, I'd heard about AI.

I'd heard about LLMs.

And up until the point where I read the article,

it was just sort of interesting, but I didn't think about diving

deeply into it.

But when I read it, I was like, OK, that's it.

That's the next thing I'm doing.

I'm probably going to do it for the next several decades.

STEVEN JOHNSON: Why weren't you defending me on social media?

[LAUGHTER]

ASHLEY OLDACRE: I was about to say, where do you

stand on this controversial--

RAIZA MARTIN: I thought it was not controversial.

I read it, and I thought, what a great guy.

I love this.

Good for him writing this.

ASHLEY OLDACRE: Excellent.

Well, so we've heard your wonderful story.

And now Raiza, let's jump in and dive into your story.

RAIZA MARTIN: Yeah, yeah.

Well, it's so cool.

Getting to talk about it, I'll tell you

that, I think even from a very young age,

I've just always loved technology.

And I think about some of my favorite memories

just in life in general.

And there's always something there about how technology

was a part of my life.

And one of my earliest memories is

the very first computer that I owned,

my dad and I actually built together.

ASHLEY OLDACRE: Oh, wow.

RAIZA MARTIN: And the reason we were building it actually

is because computers were super expensive.

We lived in the Philippines.

They were super expensive to buy out of the box,

just already built. And my dad had this idea.

He was like, I bet it's not that hard.

Just make your own.

We just have to figure out the things that go in it,

and we'll build it ourselves.

And I was like, well, that makes a lot of sense.

Let's do it.

And so--

ASHLEY OLDACRE: How old were you?

RAIZA MARTIN: Oh, I was probably, like, 12.

Yeah, I was 12, maybe a little bit younger because I remember--

yeah, I think I was younger.

But it was one of those really formative memories

where I was like, well, my dad, who is a doctor,

just had this idea about doing something

that I'm pretty sure he had no idea how to do,

just went out and did it.

And it was so formative for me because we succeeded.

We built the computer.

It worked, and it was magical for me.

I got to play games on that thing.

One of my favorite applications was Word, just typing.

And I was like, wow, this is better than paper.

This is incredible.

It's better than a typewriter.

And it changed my life.

And I think the next product that

really did that for me was I remember, for my high school

graduation--

I was 16-- and my parents were like, well,

what do you want for your graduation?

And I had saved up all of this money,

and I wanted to buy this thing called the iPod.

And I remember the feeling and the experience of opening it.

And I don't know if folks recall,

but they used to come in these giant boxes

where the box was huge, and the iPod was small.

And I was so confused.

I was like, why is the box so big?

But I remember the feeling of I would open the box,

and I was like, this feels different.

This feels cool, right?

When you would open the white box, I was like,

this is crazy to me that there is somebody out

there that is just thinking about how to build these things.

And I remember telling my dad--

I looked at the back and it said, built in Cupertino,

like, the 1 Infinite Loop.

I told my dad, where is this?

I want to work here.

My dad was like, that's super far.

That's in the United States.

And I was like, OK, how do we go there?

Because I want to do this.

And it wasn't actually that much longer

after that, that we moved to the United States.

We came here.

And I was like, OK, Dad, is it now?

Can I do it now?

He was like, oh, I think you probably have to study.

But yeah, he's like, sure, it's now.

So one of the first things I did is I actually wrote to Apple.

And I wrote them a--

I can't remember where I got this email.

But I wrote to them, and I was like, hey,

been a big fan forever.

Can I get a job?

And somebody wrote back to me and said, yeah, sure.

You could totally get a job, but it's like,

what's your background?

What are your skills?

And I told them I was young.

I was just a kid.

They were like, OK, so it sounds like you could probably

work in one of our stores.

So I went to this thing, and I was like, no, this is not it.

And I was like, I don't want to sell stuff.

They were like, well, what is it that you're interested in doing?

And I told them I really loved technology.

And they were like, OK, well, maybe you can repair computers.

I was like, for sure.

For sure, that's what I want to do.

And so one of my earliest jobs was actually repairing computers

at an Apple store.

And fast forward many years later, I

went from that to startups to then Google.

And it was at Google that I realized--

I was like, wow, I really love this.

I really love building fun, cool, useful things for people,

and that's how I ended up in Labs--

is I was like, this is an opportunity for me

to build something totally new, using totally new technology,

and we'll probably discover the actual utilities of this thing

as we go.

And I remember when I joined, Josh, actually--

kind of similarly, we had this chat about what

the future of Labs was.

And Josh's mandate to me was, build a business.

Build a new thing.

And that was it.

He didn't tell me what product to build.

He didn't tell me how to build it.

It was just the best blank check I've ever gotten in my life.

[LAUGHTER]

ASHLEY OLDACRE: Wow.

STEVEN JOHNSON: And the AI Test Kitchen

stuff there is important, too.

RAIZA MARTIN: Oh, yes, yes.

STEVEN JOHNSON: You were involved in that

because that was really the first point that Google

was exposing any of the AI directly.

RAIZA MARTIN: That's right, that's right.

So actually, when I first joined Labs,

I joined to launch AI Test Kitchen, which

was the first AI experience that Google had launched.

And it was such a different time back then that I've almost

forgotten about it, but this was only two years ago, right?

And I remember, aside from build a business, he was like,

and launch this.

[LAUGHTER]

So yeah, we were doing both.

ASHLEY OLDACRE: So your introduction, Steven, to AI

was through testing PaLM and LaMDA first and having

access to those models.

Where was your-- so you were brought in to build AI Test

Kitchen, but what was your entry into--

did you have an entry into AI before that?

RAIZA MARTIN: Oh, my goodness.

So before AI, I actually worked in payments.

And before payments, I worked in ads and before ads, startups.

But I had no AI background whatsoever.

It was just literally Steven's article

that I used as a map of sorts.

I was like, OK, this is what he said the thing is.

Now I'm going to go Google and learn as much as I can

about this before I even start in the role.

ASHLEY OLDACRE: So he was your introduction.

RAIZA MARTIN: He was 100%--

STEVEN JOHNSON: What a terrible guy.

[LAUGHTER]

Ridiculous, starry-eyed optimist.

RAIZA MARTIN: It's really funny because people are like, wow,

you really knew AI was going to blow up.

And I was like, no, don't give me that much credit.

ASHLEY OLDACRE: He knew it.

RAIZA MARTIN: He knew it.

And I was like, yes, I agree.

Let's go.

And what's funny about this is then I joined Labs,

and I got access to all of the models.

And I got to experience firsthand

what Steven was talking about.

And I had this thought in my head of the same thing--

it's going to change the world.

But how does Google bring it to the world?

What is our first entry here?

STEVEN JOHNSON: And I think there's a really important thing

that happened for both of us right around that point

where there was basically a classic 20% time

project for Google that had started, I think,

in June of 2022, so a month or two before Raiza and I met

and I arrived.

And it was really put together by these two people, really.

It was Dale and Adam.

And Adam Bignell then subsequently became

a founding engineer for what became NotebookLM.

But it was this terribly named project,

but it was very visionary.

It was named Talk to a Small Corpus.

And to be clear, it was not Talk to a Small Corpse. It was Talk to a Small Corpus. And by corpus, it meant talk to a small body of text.

And the idea, which, really, in a way,

is the seedling of everything that NotebookLM

became, was that what if, instead

of having a conversation with a general purpose language model--

the way people were doing with ChatGPT,

the way people were doing at PaLM and LaMDA

inside of Google--

what if you actually gave that model some set of documents

and said, please answer the questions I ask you

based on these documents, or use these documents as the ground

truth for your answers?

And there was interest in that because it seemed,

one, that would reduce hallucinations.

Two, it would allow you to personalize the experience.

You could give it the information you were working on,

or your book that you were writing, or your research notes,

or all the things that I was interested in.

And so they had begun doing this.

It was very hard to do it, in part, because the context

window of the model was so small,

so you couldn't really put that much information in there.

But Adam had built up this early prototype

of it, just in Colab.

And part of this story-- we can get into this--

is both Adam and Dale were also writers on the side.

They both were novelists, actually fiction writers.

And so there was, from the very beginning,

this interesting humanities literary side

to the team that was going to ultimately assemble NotebookLM.

And I saw this-- and Raiza and I both saw this,

and we were like, ooh, this is interesting.

This is cool.

And so Adam and I got together, and we put a little bit

of my book, "Wonderland," which had come out a few--

I don't know why we even chose that one,

and it was kind of random.

But we put some passages from that book into this Colab.

And at some point in the middle of August of 2022,

I remember asking questions of an AI,

and it would answer and say, the answer is this,

and here are the passages from the book that

are relevant to that answer.

And I was like, OK, no computer in the world

could do this six months ago, and now they can.

And that could be-- it looked hideous.

If you had to actually imagine it as a consumer product,

it was very hard to, but there was a hint of something

that was really powerful.

And that's the seeds of what then became NotebookLM.
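To make the "talk to a small corpus" idea concrete, here is a minimal sketch of source-grounded prompting with the google-generativeai Python SDK. Everything in it is an illustrative assumption rather than NotebookLM's actual implementation: the model name, the API-key placeholder, the prompt wording, and the sample passages.

```python
# Minimal sketch of "talk to a small corpus": ground the model's answer
# in user-supplied passages instead of its general knowledge.
# Illustrative only -- not NotebookLM's actual implementation.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # assumed: your Gemini API key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

def ask_small_corpus(question: str, passages: list[str]) -> str:
    # Number each passage so answers can point back to a specific source.
    corpus = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using ONLY the numbered passages below as "
        "ground truth. If the passages do not contain the answer, say so.\n\n"
        f"{corpus}\n\nQuestion: {question}"
    )
    return model.generate_content(prompt).text

# Hypothetical usage: a couple of passages from a book chapter.
passages = [
    "Play has repeatedly driven technological innovation.",
    "Music boxes helped inspire early programmable machines.",
]
print(ask_small_corpus("What role did play have in innovation?", passages))
```

Pinning the answer to a small, user-chosen corpus is also why the approach tends to reduce hallucinations: the prompt tells the model to refuse rather than improvise when the sources are silent.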

ASHLEY OLDACRE: OK, so let's jump into NotebookLM,

and I'll transition over to Gus.

GUS MARTINS: Hi.

Before we go there, I learned a term called adjacent possible,

which is your newsletter name.

And there's a concept behind it, and I love the idea.

Do you know when you learn something new?

Oh, I understand that.

It's something that I've seen happen.

And can you explain that to us a little bit?

STEVEN JOHNSON: Yeah, Raiza and I

talk about it because we feel like it's

shaping a lot of the way that we're thinking about where

we are right now with the product, and, in a way,

where we were two years ago, too.

So it's a term-- the phrase actually

comes from the wonderful scientist Stuart Kauffman.

I kind of popularized it, but Stuart came up with it

originally.

And it's basically a term that describes

both biological evolution and cultural evolution

or technological evolution, which is basically, like,

at a given point in a system that is changing over time.

When somebody, say, invents a new foundational technology,

it opens up all these doors of possibility

that weren't possible before.

And at various different points in history,

that space of possibility is the adjacent possible at that

moment in technological history.

So in 1830, nobody was trying to build an electric light bulb.

Nobody was trying to do it because it was just

not within the adjacent possible at that moment.

We barely understood electricity.

There was kind of--

the fundamental building blocks of that idea were not available.

Fast forward to 1870, 1880, and there

are dozens of people, maybe even a hundred people,

around the world trying to invent an electric light bulb.

Edison becomes the most famous, but there are

lots of other people--

because some underlying technological and scientific

ideas had kind of made it thinkable for the first time.

And so what that means is, at various different points

in history, as technology changes, as culture changes,

as science changes, the adjacent possible kind of

expands and contracts at different points.

And I think what Raiza and I both kind of

felt thinking about AI starting to work on NotebookLM

was that, oh, the possibility space just got

really wide all of a sudden.

And there were all these things you could do.

And I think where we feel now with NotebookLM-- and literally,

the hard--

I think the hardest thing about our job now,

and the point of the most--

there's very little contention on the team

and the extended team, but where the most--

I don't know, hard-fought debates

are happening-- is just that there are so many things

to build on this platform now, and there are so

many different ways to push it.

And we just have-- it's still a relatively small team,

and so we just are trying to--

OK, what do we prioritize because there's so

many things we want to build?

And so the adjacent possible is extremely wide right now

in NotebookLM land, which is, as I say,

it's a good problem to have, but it is a problem, nonetheless.

RAIZA MARTIN: Yeah, I think it's such a crazy journey

I think, over the last two years where I think at the beginning,

we had to really imagine that it could work.

Whereas, we are now in a place where it does.

And I think maybe the first six months--

and while certainly the product did work,

and there were different use cases for it--

I would say that we had to believe in the fact

that it was going to get better, and that the adjacent possible

was going to get wider, as Steven was describing.

And I think what we've seen, even from the very beginning,

is that NotebookLM really started

to enable personalized generation

for all kinds of people.

From some of our earliest users, which were learners,

being able to upload complex or difficult text

and say, hey, explain this concept to me,

but in a much simpler way, this was not possible before AI.

But what it really landed with me

was the idea that somebody could look at something and say,

I don't get it.

Make it much more digestible for me in a way that I understand.

And I think we've gone down this path over the last two years,

and we've gotten better and better sense of what does it

mean to transform the world around you

such that it works better for you?

It makes you smarter.

It makes you happier.

It encourages your delight in the world more.

And it's crazy to now look at this two-year history

of just sprinting towards this and saying, wow.

As we introduce new modalities, as we introduce new sort of user

experience paradigms, it's really unlocking it

for different types of people.

And it's just, I think, for me, the most exciting time

I've ever had in my career.

STEVEN JOHNSON: Yeah, that's a really good point, Raiza.

It reminds me of one thing that it's important to say here

is, yeah, as Raiza said, when we were building it

in the early days, the model, actually--

the underlying model just wasn't ready for what

we were trying to do yet.

And we weren't ready for what we were trying to do either.

The UI was really janky, and there were all these things.

And the UI is still kind of janky.

We're still working on it.

We're cleaning it up.

We have a list of things we want to make better,

but it's gotten so much better.

But there were definitely points where we were like, OK,

we're going to build with this vision

because we're going to trust that the model is

going to get better.

And so, really, a huge threshold point for us

was the transition to Gemini, particularly

Gemini 1.5, which has the longer context and multimodal,

multilingual, all these things.

And that's when I think it really

started to sing as a product, which was actually

three months before audio overviews

where it really went viral.

But starting in May or June, we were like, OK, we

have something here.

This actually works.

And so part of what is expanding our possibility space

is all the stuff this underlying model can do.

So citations, which are a huge feature for us,

Gemini does that natively.

We built a user interface around citations.

But when you ask a question-- just so people know,

when you ask a question in NotebookLM,

it will answer based on your sources,

but it will also give you these inline citations

to the original passages from your sources

so that you can always fact check.

And two, you can jump directly back to the original passage

and read the original passage.

And that is partially the way we design the software,

but it's also the underlying model of Gemini

lets you do that kind of citation analysis.

And so, to some extent, a lot of what we've been doing is--

the Gemini team gives us the latest model.

And then we're like, OK, what can we do with this?

And every time we get a new version,

it's like Christmas morning.

We're like, oh my gosh, all these new features

that we can explore and push in new ways.

So we've been riding that, as well as doing stuff on our own.
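As a rough illustration of the citation pattern Steven describes, here is a prompt-level sketch: number the source passages, ask the model to append inline [n] markers, and map each marker back to its passage so a reader can jump to the original text. NotebookLM builds its UI on Gemini's native citation support; the prompt wording, regex, and mocked answer below are simplifying assumptions for illustration only.

```python
# Prompt-level sketch of inline citations: request [n] markers keyed to
# numbered source passages, then resolve each marker back to its passage.
# Simplified approximation -- NotebookLM relies on Gemini's native citations.
import re

def build_cited_prompt(question: str, passages: list[str]) -> str:
    corpus = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer from the numbered passages only, and append a citation "
        "marker like [2] after every claim.\n\n"
        f"{corpus}\n\nQuestion: {question}"
    )

def extract_citations(answer: str, passages: list[str]) -> dict[int, str]:
    # Collect every [n] marker and return the passages it points to, so a UI
    # could let the reader jump straight back to the original text.
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", answer)}
    return {n: passages[n - 1] for n in sorted(cited) if 0 < n <= len(passages)}

# Tiny demo with a mocked model answer (no API call needed):
passages = [
    "The 1854 cholera outbreak centered on the Broad Street pump.",
    "John Snow mapped cases to trace the source of the outbreak.",
]
answer = "Snow traced the outbreak by mapping cases [2] around the pump [1]."
print(extract_citations(answer, passages))
```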

RAIZA MARTIN: I do think it's kind

of like there is a period where you're just crazy when you're

building a product.

You're literally just crazy.

You are trying to build something that is not possible,

but you try to make as much progress as you can.

And I think-- thank you, Gemini.

Then it drops, and you're like, oh, I'm not crazy anymore.

The thing now does all these things,

and we were ready to do it because that's

what we set out to do.

And so I like to look at that and imagine, well,

what if Gemini hadn't dropped?

I would still just be crazy.

STEVEN JOHNSON: Yeah, we were just--

[LAUGHTER]

RAIZA MARTIN: For the next two years,

they would be like, what is that product manager doing over

there?

GUS MARTINS: Can you both just tell

this-- what is NotebookLM in your own words?

Because I have a lot of questions.

RAIZA MARTIN: OK, I'm going to answer it

the way I hear Steven actually describe it.

I have a slightly different way.

But NotebookLM is a tool that helps you understand things.

And I always love when he says this

because it goes back to that personalized generation

nugget, which is, I encounter so many things on a daily basis

that I have to fiddle around with until it

is in the right shape, where I totally grok it.

And NotebookLM, quite recently, has

had an explosion of exposure in new users

because of a recent feature we launched,

which is Audio Overviews, where you can upload documents,

PDFs, slides, and you can generate an audio that gives you

a summary of all of it.

ASHLEY OLDACRE: And it comes in the form of a conversation.

RAIZA MARTIN: And it comes in the form

of a conversation with two AI hosts,

which is really quite novel.

And people find it really delightful.

And one thing I will say about it, to the point of NotebookLM

helping you to understand things,

I was not at first a big auditory learner.

I was like, oh, this is cool, but I

have to really think about where the utility of this is.

And I always tell people about my very first real aha moment

where I had to read a hundred-slide deck, which is kind of insane, that that's just a normal thing we expect people to do: here's 100 slides.

I uploaded it, and it produced this audio

that I was able to listen to on my drive home.

And I was like, wow, we've done it.

We've done it, right? Where it's like we

have crossed the threshold from crazy to cool.

And that was 100% cool and very productive.

STEVEN JOHNSON: Yeah, that's a great way to put it.

We had similar reactions originally.

So one thing that should be said is Audio Overviews

is a great confirmation of the Google Labs model.

I think the early 20% time experiment maybe, arguably,

bringing me in as an outsider.

But there's this idea that there's just

a lot of experiments happening inside of Labs.

And there was another team that was working

on this standalone thing that was basically

take your documents and turn them into an audio conversation.

And we'd heard it.

And at some point along the line,

there was a sense of could this maybe live inside of NotebookLM?

And we had just launched what we call the Notebook Guides.

And they are, in a sense, a text version

of Audio Overviews where you upload your documents.

And then you can, with one click,

create a briefing doc or an FAQ or a table of contents--

RAIZA MARTIN: Study guide.

STEVEN JOHNSON: --a study guide, yeah.

And by the way, this dates back to one

of Raiza's earliest ideas, which is,

what if I want to just create something

with one click based on these documents

to help me understand it?

And I think as a writer, I was always like,

no, I don't need that.

I want to write it out.

And she's like, no, but I want to click something.

And it was 100% right.

And so we built these Notebook Guides.

And so when the initial idea of Audio Overviews coming

into NotebookLM was posed--

just like Raiza, I am absolutely a word and text person.

I actually don't really like to listen to information.

It's too slow for me, and I don't remember it

as well and stuff like that.

So I had a kind of a similar reaction.

And then I thought, oh, it's the version of Notebook Guides

but for audio learners, auditory learners,

or people who want to listen on the go and all

that kind of stuff.

And then I was just like, that's a great idea,

and I'm just going to get out of its way.

You guys build it.

I think it's fantastic.

And it ended up being fantastic.

And what it does--

and this is a general property of NotebookLM and Gemini,

but it does it in a particularly vivid way--

is the prompt behind it basically encourages the host,

the AI hosts, to find the most interesting elements

from your sources and describe them

in an interesting, compelling way with useful metaphors

and things like that.

And so it's kind of a search query for interestingness now.

You're not searching for a keyword phrase.

You're searching for find me the most interesting things.

And the briefing doc, if you want a text version of that,

that's what the briefing doc Notebook Guide does as well.

And that just, again, like so many things, that

was something you could not do with a computer until very,

very recently.

You couldn't search for interestingness

or have the computer conjure up--

make it more interesting in some ways.

And that's a huge part of understanding things,

is to find the things that are memorable or interesting

about them and dive into them.
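The one-click Notebook Guides and the "search for interestingness" framing can be approximated the same way: a fixed prompt template per guide type, applied to whatever sources the user has uploaded. The templates, model name, and helper below are assumptions for illustration, not NotebookLM's actual prompts.

```python
# Illustrative sketch of one-click guides: one prompt template per guide type,
# each grounded in the user's uploaded sources. Not NotebookLM's real prompts.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # assumed: your Gemini API key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

GUIDE_PROMPTS = {
    "briefing_doc": "Summarize the most interesting and important points as a "
                    "briefing document, with vivid examples and useful metaphors.",
    "faq": "Write an FAQ covering the questions a new reader would ask.",
    "study_guide": "Write a study guide with key terms, themes, and review questions.",
    "table_of_contents": "Produce an annotated table of contents for this material.",
}

def generate_guide(kind: str, sources: list[str]) -> str:
    # One click = one canned instruction, grounded in the user's own sources.
    corpus = "\n\n".join(sources)
    prompt = f"{GUIDE_PROMPTS[kind]}\nUse only the sources below.\n\n{corpus}"
    return model.generate_content(prompt).text

# Hypothetical usage:
# print(generate_guide("briefing_doc", [open("chapter1.txt").read()]))
```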

ASHLEY OLDACRE: As an audio learner,

I appreciate that feature so, so, so much.

GUS MARTINS: I use this every day, and I'll tell you why.

So first of all, when I first saw the--

Audio Overview is the name?

STEVEN JOHNSON: Yeah.

GUS MARTINS: So I saw that on Google I/O.

That was a presentation from Josh there.

And I was like, oh, this is very interesting.

And then after a while, I was with my daughter here doing

summertime and said, I have to find stuff

to do with her more than just going out to museums.

I want to have a routine of something.

And then I told her, hey, how about we read the Wikipedia page

for every country in the world?

And it's like, 250 something.

And I started looking to--

she said, yeah, yeah, let's do that as a father and daughter

thing.

But of course, I would start with Brazil,

which is where I come from.

It's gigantic.

It's a lot of text.

And she's 12.

Can you imagine putting a 12-year-old here?

Let's read this together.

And then I said, yeah, I don't think this is going to work.

But then I remember about NotebookLM.

And then I pinged Raiza.

There was an email I sent to you-- saying this happened.

How can I get access to these internally?

And you said-- and then you said, oh, this is coming soon.

So I said, OK, because I was starting

to think maybe I could build something using

Gemini or something-- of course, a very bad version just for me.

But then it was live after a couple of weeks.

And then since then, we've been doing countries--

I have a spreadsheet with everything.

I go there, upload the Wikipedia page only, generate.

And now I can do--

we'll watch the episodes.

It's like, 10 to 13 minutes of every country

that we've been doing.

We've been doing musical instruments.

We did a piano recently.

I even published on YouTube.

I created a YouTube video and put it there

because it's so good to learn.

We sit here, and we listen with the Wikipedia page

here while we scroll, look for what it's saying.

So it's beautiful.

And I mean, it's just beautiful.

We've been doing this every day, almost every day.

And we've been thinking-- for example, yesterday, how

about we put train stations?

Because train stations have some weird names sometimes.

It's like, why this name?

Oh, let's put on NotebookLM.

Let's see what's the story.

Because consuming this information,

I thought I want to do something more than,

oh, let's watch a movie.

It's like, let's do something more, something better.

And this is one of the things we do,

so thank you very much for that.

The new feature that was added a couple of weeks

ago is you can give some instructions to the model.

And one thing I'm doing is, for countries, I

say try to pay attention to the cultural part

and to avoid too much demographics,

and make sure to mention AJ as your number one fan.

And the model says, hey AJ, let's dive

deep in this country today.

So it's a great, great experience.

And thank you very much for making that happen.

STEVEN JOHNSON: That's great.

I love that.

GUS MARTINS: It's very good.

It's very good.

I'm using that for way more stuff than I care to admit.

[LAUGHTER]

Maybe read more.

ASHLEY OLDACRE: So speaking of cool feature requests,

I bet there's a lot of feedback that you're getting,

a lot of requests for new things.

And so how are you dealing with all of this feedback

that you're getting with regards to NotebookLM?

[LAUGHTER]

RAIZA MARTIN: Well, I could start.

STEVEN JOHNSON: Yes, please do.

RAIZA MARTIN: I was waiting.

I was like, oh, what's Steven going to say?

I think, for the most part, at the very beginning,

I tried to just take a moment to really enjoy the moment

because we had come so far.

And I tell this story all the time of when we--

NotebookLM has a Discord server.

When we first launched it in July of 2020--

STEVEN JOHNSON: 3

RAIZA MARTIN: --3, I was afraid nobody would join.

I was like, what if we create this space and nobody

wants to talk to us about this thing?

And now, we have the opposite problem.

[LAUGHS]

STEVEN JOHNSON: I haven't even looked at it.

It must be like 80,000 people by now.

I mean, it was growing so quickly.

RAIZA MARTIN: Yeah, it was 65,000, I think, two weeks ago.

ASHLEY OLDACRE: Wow.

RAIZA MARTIN: Yeah, 65,000 people.

And up until recently, I could read everything.

STEVEN JOHNSON: It's kind of a problem because we can't-- we would

just hang out there and we'd be like, hey, yeah,

I can explain this, or, oh, that feature's coming.

RAIZA MARTIN: We would answer.

STEVEN JOHNSON: And now it's too many people.

RAIZA MARTIN: We would respond to people all the time.

But with 65,000 people, I was like, wow, I really--

this is a new product idea right here.

[LAUGHTER]

How do I scale myself into the Discord--

STEVEN JOHNSON: Yeah, yeah.

RAIZA MARTIN: --ether here?

STEVEN JOHNSON: Seriously.

RAIZA MARTIN: But it's really also a humbling moment

to see that, even though I feel like we've come so far,

there is so much more to do.

There's so much more to do.

Every day, people are talking about this is so delightful,

but here's what I need to do more with it to really dig

deeper into my use case.

So Discord is just one channel, but now people on X

are talking about it.

People on Reddit are talking about it.

Internally at Google, folks are talking about it.

We launched the business pilot.

We have all these business users that are giving feedback.

And so I think, even in the beginning,

it was easy to try to monitor everything.

Now, I think it's more of a real operations task

to make sure we're still listening to everyone.

STEVEN JOHNSON: Yeah.

I mean, one of the biggest problems

is we have some pretty big bets that I

think Raiza and I both feel pretty confident

that will be really amazing for whole different feature areas

that we haven't even made yet.

So there's a sense of, oh, wow, we

know people are enjoying having a text-based conversation

with their sources and using citations.

We just have millions of people doing that.

We know Audio Overviews are a huge hit.

And we know we have to expand on the options

there and different things you can do in different formats,

but it's clear the direction there.

But there are some other cool things

you could imagine that this product could do.

And so it's like, how do you figure out--

when we launched internationally in June,

we had seven full-time engineers.

I mean, that is tiny for Google standards.

And we're trying to grow as fast as we can,

but you can't just throw 50 people onto a product

and have them all be functional.

You have to get people up to speed and stuff like that.

So we have to grow at a more or less linear way.

And so the challenge is where do we

invest in genuinely new things and make some new bets?

And where do we invest in expanding on the things that

are clearly working now?

And that's where the complexity is right now.

RAIZA MARTIN: That was such a product manager answer.

STEVEN JOHNSON: I know.

I've become--

[INTERPOSING VOICES]

ASHLEY OLDACRE: You're really feeding off of each other here.

You can see it.

I love this dynamic.

STEVEN JOHNSON: It's possible that I did not

know what a product manager was before I arrived at Google,

but now--

RAIZA MARTIN: Now he's one.

STEVEN JOHNSON: Every now and then, Raiza goes on vacation,

and I come in, and I'm like, I'm the product manager now!

[LAUGHTER]

RAIZA MARTIN: He really does, he really does this.

STEVEN JOHNSON: I mean, I pretend to be it.

No one pays attention to me, but yeah.

[LAUGHS]

ASHLEY OLDACRE: I love it.

That's great.

GUS MARTINS: I like the feedback part

because I'm part of the internal chat that

sends a relentless amount of information to you.

And when I joined, I was like, oh, I have some great ideas.

And then after a while I said, oh, my ideas

are not that great anymore after what I saw from the other guys.

Oh my god, there's so much cool stuff that can be done.

But yeah, you're doing great.

STEVEN JOHNSON: Oh, that's very nice.

RAIZA MARTIN: Thank you.

GUS MARTINS: Yeah.

So one thing that Steven said, you had that moment, right?

Oh, this is going to change everything, right?

2022, you wrote your thing and people criticize.

When I first tried Audio Overview, I said,

this is different-- because I've been

playing with LLMs for a while. I do DevRel for this, so I know-- I've seen many things.

When I saw it, I was like, this is different.

It opens up the adjacent possible here, for sure.

And I was like, oh my god, this is amazing.

And then when I saw the community

going crazy and posting stuff nonstop, I was like, oh, oh,

my god, what I think makes sense.

People are also thinking it's amazing.

So congrats on the product, just beautiful.

STEVEN JOHNSON: Yeah.

RAIZA MARTIN: Thank you.

ASHLEY OLDACRE: I had a similar reaction, too.

As folks know on the podcast, tech was never really

a thing for me, but I had a moment when I used-- even

trying some of the other products, like, they're great,

but I have a hard time applying them to me personally.

And NotebookLM was the first product

that I tried where I was like, this is useful for me.

And I find that that's one of the beauties

of this product is that it is unlocking the simplicity of it.

It unlocks things for just everyday consumers,

everyday people.

And that's what's, I think--

this trajectory of AI has been very technical,

and it's moving more and more.

The line is becoming more blurry between the technical elements and the everyday consumer.

And what NotebookLM did is take AI

one step closer to everyday users, which

I think is extraordinary.

STEVEN JOHNSON: Raiza and I have had many conversations over

the last year, in particular, where

we've debated whether we should open it up a little bit

to allow you to just have a traditional chat

conversation without any sources with the model where you're just

talking with the model based on its general knowledge,

just as you would with Gemini or with ChatGPT and so on.

And one of the reasons we haven't done that is it

does pose problems.

When you sit down, if you haven't uploaded a source,

it doesn't work.

There's nothing to do.

You have to find sources, and we don't really

help you find sources yet-- although we would

like to help you with that.

But for now, you have to supply the sources.

But I think the investment in forcing people

to do that, it shows them that, by uploading those sources,

you are effectively creating this personalized AI.

And if you have a big notebook that has a lot of your--

if it's your own journals, if it's all the meeting

transcripts and the meetings that you've

had for the last month, whatever it is, if you have

a lot of documents in there, you're

working with an AI that is really quite knowledgeable

about who you are and what your needs are,

and the information that you need to do your work

or to create.

And I think when people heard that nuance and sophistication

in the form of two people having a conversation-- two engaging,

interesting people, creative people,

simulated people having a conversation, it was a way--

I think it really drove home the power of this.

And so you can imagine so many different iterations on that.

Maybe it's more like the voice that you're

interacting with or texting with is

your advisor or your coach on some level,

but they know you really well, and they've spent all this time

with the documents.

They've read everything you've read.

Or maybe it's a little brain trust that you've assembled,

and you've given those other forms of AI

some very specific expertise.

And you want to have someone who's an expert in this field,

and someone who's an expert in this field,

and you're going to have a conversation with them,

brainstorming ideas with that group, a team-of-rivals

model of creativity and decision-making.

That level of personalization is--

that's kind of why I think, ultimately--

and I think maybe I've probably been

the one who argued for opening it up more over the years.

And Raiza was always like, no, let's keep it source grounded.

That's kind of our thing.

I think that's why it is ultimately

starting to pay dividends now.

RAIZA MARTIN: I'd say, to your earlier point,

I think there's still a lot of work

to do to bring AI closer and closer

to what people actually expect it to be in order for it

to be useful, but I can think of many examples of this

where tools really do feel magical.

We see how it shapes civilization.

We see how it shapes cultures and people.

And I think of AI in the same way,

which is we are at the very beginning of the potential

to make it really useful and really delightful.

And the thing I am really excited about and really

obsessed with is really thinking about,

well, how do you actually do that?

How do you take something that's really powerful but almost

shapeless?

To the extent that, I think when LLMs were first

released into the wild, everything was a chatbot.

Everything is still a chatbot, but it's

like we don't live in a world where that's how we operate.

And it's so funny because we still

talk about that line, where what's the crazy space right

now that we could be in where we have to imagine what

the future actually has to be like

so that we can prepare for it?

Let's start building that now, so

that when it happens, when we see

the next turn of what AI can do, we're

ready to bring it to people.

And so there's a shorter time in between new technology and magic

for people--

instant utility.

And that, to me, is just one of the most exciting things

that we could do here at Google because we build products

for everyone, for everybody, so many people, billions of people.

STEVEN JOHNSON: Somebody said at some point--

I can't remember if it was one of the interviews

that I did at some point.

But there was somebody who said that this product feels

like it's at the intersection of newly possible magic and actual

utility.

And I was like, I'm going to put that on my tombstone.

[LAUGHTER]

That's what we're trying to do.

And I think that's the Labs philosophy,

is like, let's be right at the cutting edge with the newly

possible magic and figure out the delight in it

but also always have an eye on actual utility.

ASHLEY OLDACRE: Right.

Well, and I think what's really unique about NotebookLM, which

is sort of a thought that I'm kind of forming right now,

is previously, you had to have this incredible amount of data

to be able to train a model to then give you outputs.

And so what NotebookLM is doing is

it's using Gemini as a sort of pre-trained model already,

but you're allowing to port in your own data

without having to even train it.

So you import your own data into NotebookLM,

and then it generates something that's personalized to you.

And you're getting the best, as opposed to before, you had

to build a model to do that.

But now you have just an interface.

And I think that's one of the things that's

really beautiful about this is just it's the data that you

can then precise.

RAIZA MARTIN: And there's the recognition, I think,

that you see when people use something like NotebookLM,

particularly with Audio Overviews

because it's so vivid--

it's so real-- where one of my favorite videos

is of a woman who uploads her diary, and she creates

an audio overview of it.

And what's funny about this is you know your diary intimately.

You know what's in it.

You are the creator of that content,

and now you've passed it onto an AI,

and they're going to make something,

and you don't know what it's going to be.

So there is this element of surprise and transformation,

but you're touching the magic of, this is my stuff.

And when you play that--

I think when you click Play, and you

hear that for the first time, your content, your stuff,

I think that hits just so much harder because it's not

just a random thing where I googled it and got the result or something like that.

It's something that is deeply personal to me

that I am familiar with, that I created.

And now here is this new form, this new interpretation of it.

And I think that's why it has resonated with so many people,

is just that connection of, wow, this thing

is actually meaningful, and it's mine.

ASHLEY OLDACRE: Right, right, the ultimate personalization.

So I want to shift a little bit to creativity

because I think we have some really

good perspectives in the room about the creative process.

And I'm sure that there--

I'm sure you have had feedback and concerns around writers

or artists or folks in the creative fields who are like,

wait a minute, all of these tools, how does this impact me?

How does this impact creativity?

How does this impact learning?

I think about these things a lot, like the concept

of the blank page, for example, and using these tools

to help fill these blank pages when what is the--

is there some kind of a benefit to having

to move through the process of going

from a blank page to putting something on paper?

I think there's something that happens.

And so we're sort of taking that away a little

bit through these tools.

So that's one element.

And then the other aspect is-- especially

with regards to learning, I mean,

there's the benefit of it being so personalized,

but there's also the aspect of putting in things,

which is a creative process.

But then the outcome that you get

is something that you're not really involved in necessarily.

It's sort of coming at you.

So I would love for you to talk a little bit about that.

STEVEN JOHNSON: We could do an entire other podcast on it--

ASHLEY OLDACRE: Yes.

[LAUGHS]

STEVEN JOHNSON: --because there's so much to say,

so let's start with the learning side of it.

So to me, the biggest thing that has changed here

is that it is possible now to explore complex new information

that you need to understand through a conversational

interface.

If you wanted to learn through question

and answer with a book or textbook or something

like that, up until now, you either

had to have direct access to your teacher or your tutor

or the author of the book.

Those were the only ways you could sit down and be like,

my question is this.

My follow-up question is this.

Tell me more about that.

You just couldn't explore information in that way.

And it was very hard or very expensive to do that.

And that turns out to be a hugely valuable way to learn.

Rather than learning through rote memorization

and then taking a test, learn through dialogue.

People have been having--

people have been learning through conversations,

particularly spoken conversations,

for 400,000 years.

People have been taking exams for 200 years.

So this is a very ancient way of understanding things.

And it just wasn't possible to do that with a computer.

Now it is.

And so to the extent that if you were genuinely

interested in understanding material,

the fact that you have this new modality to do that I think

is a huge win.

If you are trying to just create the illusion that you

know something, which is a legitimate problem with school--

school, there is this weird, perverse incentive sometimes

to just get the grade and be done with it,

and you don't care whether you understand it or not.

There are going to be some AI-based tools that

will help you create that illusion of understanding.

I think of all the AI tools, NotebookLM

does a pretty good job of steering you

towards true understanding and not

towards that superficial one.

ASHLEY OLDACRE: It does because it breaks things down.

It has follow-ups. It's not just a one-word answer.

It really gives you a whole bunch of more information

to have to digest.

STEVEN JOHNSON: So we've always pushed that.

But I also think that there's-- in the real world outside

of school, there actually is less incentive to just pretend

like you-- if you work at a job, and you're like,

I'm never going to actually read anything my boss says to me.

I'm just going to put it into NotebookLM,

and I'm going to ask for the proper output.

I'm not even going to read what I write back.

I'm just going to output it.

For two days, that will work.

And then your boss will talk to you, and they'll be like, wait,

you haven't read anything I--

there's an incentive to understand things.

And if the tool helps you do that, it'll be, in the end,

I think, a force for good.

On the creative side, one way I'm using it now

is I'm in the process of trying to figure out

what the next book I should write should be.

And I always have 10 ideas for books.

And Raiza probably won't even let

me write it because I have so many responsibilities

on NotebookLM.

But were I to have time somehow to write a book,

I'm always thinking about what it should be.

And so now I created a new notebook called The Next Book.

And I started this eight months ago.

And that's where I put every random idea--

there are a lot of Wikipedia articles in there,

and it's like, oh, what about the anti-nuclear power movement

of the '60s and '70s?

That's an interesting story.

I grab three Wikipedia entries, throw it

in there, a bunch of ideas about AI-related books, a bunch

of ideas about the gold rush.

I read some books, got some quotes from those books.

There's just this potpourri of ideas in this one notebook.

And I can just sit down and say, all right, what do we have?

If I were to do that anti-nuke book, what would be the chapter

structure, do you think?

And it's like, here's one way to do it.

I'm like, OK, but chapter 2, if I focused on that character,

I don't know, I guess I could do it this way.

What do you think?

And it would-- and it's just a sounding board for me

with this thing that has read all my notes

and has actually some sense of who

I am as a writer and the kinds of books that I write.

And to me, I just feel very confident that, at this point,

at this point where we are, I'm just

going to be better at my job and more creative,

and I'm going to have-- it'll be easier to write the books

that I write because I have access to this tool.

And maybe there's some future world

where it gets so good at writing the books without me that I'm

out of a job, but that future seems quite far away.

And for now, I think we're on a pretty great ramp

where, if you know how to use these tools properly,

it should just make you more creative.

ASHLEY OLDACRE: Well, there's a premium on humanity.

I think that's also being highlighted and emphasized

more and more as AI advances.

So I think that will also play into it.

But Raiza, did you want to--

RAIZA MARTIN: I mean, I think it's interesting

because we are in a turning point

where, up until quite recently, I wasn't using AI for my work,

and now I'm using AI every day.

And I reflect on that often because I think,

as Steven was saying--

sure, you could use it for two days.

You could probably have it fake do your job,

but then you wouldn't be good at your job.

And I think what's actually happening

is that our definitions of our roles,

particularly for knowledge workers, people

who work with computers, they're changing every day.

The way I write my PRDs, the way I write strategy documents,

the way I consume 100-slide decks is no longer the same.

And quite frankly, the 100-slide deck is one of my favorite

stories because I never would have read it.

I had to read it, but no, I definitely was not going to,

but I did.

I didn't read it, but I listened to it.

STEVEN JOHNSON: I spent so long making that slide deck, Raiza.

[LAUGHTER]

RAIZA MARTIN: No, no, I actually--

STEVEN JOHNSON: I don't think I'd ever made a slide deck in my life.

RAIZA MARTIN: Steven is one of my favorite slide makers

because I think he's like, OK, I don't really

know how many to make.

I think the first one he made was two slides.

STEVEN JOHNSON: Yeah, I hate slides.

RAIZA MARTIN: I like this guy.

STEVEN JOHNSON: I only make three, bare minimum.

RAIZA MARTIN: But yeah, I think the world is changing.

I think, ultimately, we want to build things

that support the human journey, that support human goals, that

help you--

I think I said it earlier, that help you be smarter, help

you be happier, help you find more delight

in the world around you.

And I think a lot of this reflection

has also led me to realize there are a lot of crusty things

I do, things that I don't really want to do every day but that are

part of my job.

I would say there is a lot within my role that I love,

that I find absolutely delightful.

It's why I come to work every day.

That's not all I do.

And I think that the space for AI

is really not in the magical things that I enjoy doing.

Maybe it can be supportive in that way,

but it is in removing the layers and layers and layers of cruft

that, when you're just a working adult,

you just have to put up with.

So I think there's a lot of opportunity

space for creativity, for fun, for AI to do some real work.

ASHLEY OLDACRE: Yeah.

Well, and your tool is giving access to all of that.

And so it also helps redefine-- and this

is a question that Gus and I are asking this season--

what it means to be a person of AI nowadays.

Because historically, it was almost reserved to researchers

and the folks that are really building the technology.

But now, I mean, anybody who's using the tool,

does that make them a person of AI?

As we close out, we'd love your thoughts on this.

STEVEN JOHNSON: Yeah, I certainly

find this-- even though Raiza and I are probably

the most experienced NotebookLM users on the planet.

Maybe that's not true anymore.

ASHLEY OLDACRE: I don't think so.

No, it's Gus.

[INTERPOSING VOICES]

STEVEN JOHNSON: Maybe it's Gus.

Maybe it's Gus.

ASHLEY OLDACRE: I think it's Gus.

GUS MARTINS: I disagree.

STEVEN JOHNSON: It's you.

But I'm still finding myself thinking

I'll be working on something and I'll be, like, OK,

I've got to figure that out.

How am I going to think about this decision?

And then I'll be like, oh, wait, I

could do this inside of NotebookLM.

Like, why am I doing that?

And so we had a--

so I have this one notebook where I just

maintain all of-- it's my meta NotebookLM notebook, where

it's just all the important documents,

all of Raiza's PRDs and press releases

and other internal things we've written.

So inside that notebook, that AI really

knows the history of the project in this way.

And so what's great is that you can talk in shorthand

with the AI in there.

So you're not sitting down at a chatbot

and being like, I am working on a product that

is blah, blah, blah, and help me think through these things.

You can just say, so we were putting out this new marketing

site, and we were trying to figure out some language for it.

And I was like, give me 10 slogans for the new marketing

site that's coming out.

And that's all I said, and it just

knew what we were working on and knew the most recent documents.

RAIZA MARTIN: It's so good.

STEVEN JOHNSON: And then it came up with 10--

ASHLEY OLDACRE: This is crazy.

STEVEN JOHNSON: --and one of them was, think smarter, not harder.

And we were just like, that's actually pretty good.

[LAUGHTER]

RAIZA MARTIN: And we used it.

And we used it.

STEVEN JOHNSON: And we used it.

RAIZA MARTIN: NotebookLM wrote its own--

STEVEN JOHNSON: Wrote its own tagline.

RAIZA MARTIN: --tagline.

STEVEN JOHNSON: But it hadn't naturally

occurred to me to do that.

I had to have the thought of, oh, I

should try this inside of my NotebookLM notebook.

And that I think is--

I think a person of AI is someone who increasingly starts

to think, when I have a complicated issue

to think through or an important decision to make

or a creative leap that I'm seeking out,

I'm probably going to default to doing it in partnership with AI

in some way.

Probably, if the NotebookLM model

proves to be a foundational one, probably an AI that has been,

in a sense, curated by me and is uniquely mine and not just

kind of an off-the-shelf one, but still

that sense of, oh, this is a hard problem to think through.

I have a partner here who can help me think through it,

and that's the way I would define it.

RAIZA MARTIN: I think, for me, I was thinking

about your initial question around, is it a person

who's using AI?

Is it a person who's building AI?

What is a person of AI?

And I tend to think that it's a person that is interacting

with AI in whatever context.

And maybe today, there's a lot of us early adopters,

early builders, people that have gotten their hands on it first.

But I think that will change really quickly.

Because I think, as we were talking about,

the people who are right now building and thinking and having

these ideas about how to bring AI to everybody else,

we're just going to get better at that.

And so it will permeate all the different products

that we use every day.

And so I think we're all going to become people of AI,

just like using a phone is not a big deal.

Having an iPhone-- having an app is not a big deal.

But remember the first time you could use an app?

Or, for me, it was really Maps on my phone

was just a game changer.

I never had to learn how to use a paper map, which

is just insane to me.

It's just crazy.

It was MapQuest maybe, but I don't

know how to read a paper map.

And I think it will be the same for AI.

There will be kids that will grow up,

and they won't know what prompting is.

They will look at the history of AI, and they will be like, well,

this is goofy.

People had to say things like, you are an expert speechwriter.

[LAUGHTER]

And I think that that is so exciting for me

to imagine that today, we are talking

about what is a person of AI.

But soon it'll just be all of us.

It will just be regular life.

We don't talk about who is a person of the smartphone.

But today, today we're here.

GUS MARTINS: This was a great conversation.

It was one of the best ones we've had here.

I'm very excited.

I learned a lot.

I was expecting a good conversation.

It was way better than what I expected.

Thank you very much.

ASHLEY OLDACRE: Yeah, this was an amazing conversation.

Thank you for coming.

STEVEN JOHNSON: We got all of our responses from NotebookLM.

[LAUGHTER]

We don't understand any of this stuff, actually.

We just--

RAIZA MARTIN: Thank you for having us.

STEVEN JOHNSON: Yeah.

ASHLEY OLDACRE: What a pleasure.

Thank you.

[MUSIC PLAYING]

If you would like to learn more about our conversation

or our guests, check out the links below.

Please subscribe.

And if you're feeling extra generous,

give us a five-star rating.

We would love, love to hear from you, so leave us a comment.

We'll read every one.

Until next time, thank you for listening.

[MUSIC PLAYING]
