Google DeepMind C.E.O. Demis Hassabis on Living in an A.I. Future | EP 137
By Hard Fork
Summary
## Key takeaways
- **Google I/O: AI Everywhere, Not Just a Feature**: Google's latest developer conference, Google I/O, signaled a significant shift in its AI strategy, moving beyond isolated features to integrating AI across all of its products, aiming to enhance users' lives and demanding their engagement. [02:09], [16:28]
- **Search Reimagined: AI Mode vs. Traditional Search**: Google is introducing 'AI Mode' in search, a tab offering conversational, multi-step queries similar to ChatGPT, distinct from the standard search with AI Overviews and the standalone Gemini app, aiming for a cleaner, more interactive search experience. [04:55], [05:58]
- **Gemini's Rapid Growth: 400 Million Users**: Google's Gemini app has achieved significant traction with 400 million monthly users, surpassing most other AI chatbots and indicating strong user adoption and perceived utility beyond passive integration into existing products. [08:37], [09:11]
- **AI's Creative Spark: Hallucination as Innovation**: In creative applications like Alpha Evolve, 'hallucination' in AI models can be a valuable tool, generating novel ideas and exploring new possibilities that might be missed by purely factual or conventional approaches. [41:47], [42:04]
- **AGI Timelines: Brin's Optimism vs. Hassabis's Caution**: While Google co-founder Sergey Brin predicts AGI before 2030, Demis Hassabis maintains a more cautious 'just after 2030' timeline, emphasizing the need for higher bars in invention, consistency, and true generality, not just incremental improvements. [31:14], [32:25]
- **Future Skills: Adaptability in the Age of AI**: As AI becomes more integrated, future success will depend on 'meta-skills' like learning to learn, creativity, adaptability, and resilience, enabling individuals to navigate constant change and leverage AI as a 'superpower'. [52:53], [53:36]
Topics Covered
- Google's AI Fire Hose: A Fever Dream of Innovation
- AI is Coming to Everything, Everywhere, All at Once
- Google Search's New AI Mode: A Cleaner, Smarter Experience?
- Skills for the AI Generation: Embrace Tools, Master Fundamentals, and Cultivate Meta-Skills
- The 'Soul' Gap: Why Human Art and Connection Remain Unique
Full Transcript
This year, Google talked about AI very
differently. This time, they want you to
sit up. They want you to lean in. They
want you to pay them $250. And they want
you to get to work. I've been working every hour there is for the last 20 years, because I've felt how important and momentous this technology would be. Whether it's 5 years or 10 years or 2 years, they're all actually quite short timelines when you're discussing the enormity of the transformation this technology is going to bring. When I see a van Gogh, the hairs go up the back of my spine, because I remember what he went through, the struggle to produce that, right, in every one of van Gogh's brush strokes. Even if the AI mimicked that and you were told that it had, it's like, so what?
[Music]
Now there's a very large what looks like
a circus tent over there. What do you
think's going on in there? That is the Shoreline Amphitheatre. Oh, that's the amphitheater under that tent? Yesterday, I thought that was just some carnival that they were setting up for employees.
Okay, my mistake. I thought Ringling
Brothers had entered into a partnership
with the Google. It's a revival tent.
They're bringing Christianity back.
I'm Kevin Roose, a tech columnist at the
New York Times. I'm Casey Newton from
Platformer, and this is Hard Fork. This
week, our field trip to Google. We'll
tell you all about everything the
company announced at its biggest show of
the year. Then Google DeepMind CEO Demis
Hassabis returns to the show to discuss
the road to AGI, the future of
education, and what life could look like
in 2030. Kevin being very old for
starters. Somebody did text me to ask why I freaking yell the name of the show every episode. And did you say it's because I started yelling my name? I said it's because of the cold brew.
Well, Casey, our decor is a little
different this week. Mhm. I'll say it: it looks better. Yes. We are not in
our normal studio in San Francisco. We
are down in Mountain View, California
where we are inside Google's
headquarters. I'm just thrilled to be
sitting here surrounded by so much
training data. That's what they call
books here at Google.
So, we are here because this week is Google's annual developer conference, Google I/O. There were many, many announcements from a parade of Google executives about all the AI stuff they have coming. And we are going to talk in a little bit with Demis Hassabis, who is the CEO of Google DeepMind, essentially their AI division, and who has been driving a lot of these AI projects forward. But first, let's set the scene for people, because I don't think we have ever been together at an I/O before. So what is it like? So Google I/O has a bit of a festival
atmosphere. It takes place at the
Shoreline Amphitheatre, which is a
concert venue. Uh, but once a year it
gets transformed into a sort of nerd
concert where instead of seeing
musicians perform, you see Google
employees vibe coding on stage. Yes
there was a vibe coding demo. Um, there
were many other things. I did actually see, as I was leaving, the Google a cappella group, Googapella, sort of doing their warm-ups in anticipation of some concert. So, you've got some old-school Google vibes here, but also a lot of excitement around all the AI stuff. So now, I didn't see Googapella perform. Where was this performance? I didn't see them perform either. I just saw them warming up. They were sort of doing their scales. They sounded great. You know what? I bet it was a classic a cappella situation where they warmed up and someone came up to them and said, "Please don't perform."
All right, Kevin. Well, before we get
into it, shall we say our disclosures?
Yes. I work for the New York Times
which is suing OpenAI and Microsoft
over copyright violations related to
training of AI systems. And my boyfriend
works at Anthropic, a Google investment.
Oh, that's right. Yeah. So, let's talk
about some of what was announced this
week. There was so, so much. We can't get to all of it, but what were the highlights from your perspective? Well,
so look, I wrote a column about this, Kevin. I felt a little bit like I was in a fever dream at this conference. You know, I think often it is the case at a developer conference that they'll sort of try to break it out into one, two, three big bullet points. This one felt a little bit like a fire hose of stuff. And so by the end I'm looking at my notes saying, "Okay, so email's gonna start writing in my voice and I can turn my PDFs into video TED talks. Sure, why not?" So I had a little bit of a fever-dream mentality. What was your feeling? Yeah, I told someone
yesterday that I thought the name of the event should have been Everything Everywhere All at Once, because that did actually feel like what they were saying: every Google product that you use is going to have more AI, that AI is
going to be better and is all going to
make your life better in various ways.
Uh, but it was a lot to keep track of.
Yeah. I mean, look, if we were going to
try to pull out one very obvious
theme from everything that we saw, it
was AI is coming to all of the things.
And it's probably worth drilling down a
little bit into what some of those
things are. Yeah. So, the thing that got my attention, and, since I was sitting right next to you, the one time I really noticed you perking up, was when they started talking about this new AI mode in Google search, their core search product. So talk about AI mode and what they announced yesterday. So, Kevin, this
gets a little confusing because there
are now three different kinds of major
Google searches. I would say there is
the normal Google search which is now
augmented in many cases by what they
call AI Overviews, which is a sort of AI answer at the top. Yeah, that's the
little thing that will tell you what the meaning of phrases like "you can't lick a badger twice" is. Right.
That's right. And if you don't know the
meaning of that, Google it. Um, so
that's sort of thing one. Thing two is the Gemini app, which is kind of like a one-for-one ChatGPT competitor.
That's in its own, you know, standalone
app, standalone website. And then the
big thing that they announced this week
was AI mode, which has been in testing
for a little while. And I think this
sort of lands in between the first two
things, right? It is a tab now within
search. And this is rolling out to
everybody in the United States and a few
other countries. And you sort of tap
over there and now you can have the sort
of longer, you know, multi-step
questions that you might have with a
Gemini or a ChatGPT, but you can do it
right from the Google search interface.
Yeah. And I've been playing with this
feature for a few weeks now. It was in their Labs section, so you could try it out if you were enrolled in that. And it's really nice. It's a very clean thing. There are no ads yet; they will probably appear soon. It does this thing called fan-out, which is very funny to me. You ask it a question and it kind of dispatches a bunch of different Google searches to crawl a bunch of different web pages and bring you back the answer, and it actually tells you how many searches it is doing and how many different websites it's consulting. So I asked it, for example, how much does a Costco membership cost? It searched 72 websites for the answer to that question. So AI mode is very, very eager to answer your question, even if it does verge on overkill sometimes. Yeah.
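Google hasn't published AI Mode's internals, but the fan-out pattern Kevin describes is, at its core, concurrent sub-searches feeding one synthesis step. A minimal sketch of that pattern in Python, with a hypothetical web_search helper standing in for a real search API:

```python
import asyncio

async def web_search(query: str) -> str:
    """Hypothetical stand-in for a real search API call."""
    await asyncio.sleep(0.1)  # simulate network latency
    return f"results for {query!r}"

async def fan_out(question: str, subqueries: list[str]) -> dict[str, str]:
    # Dispatch all sub-searches concurrently instead of one at a time,
    # then gather the results for a synthesizing model to summarize.
    results = await asyncio.gather(*(web_search(q) for q in subqueries))
    return dict(zip(subqueries, results))

async def main():
    answers = await fan_out(
        "How much does a Costco membership cost?",
        ["costco membership price 2025", "costco gold star vs executive",
         "costco executive membership benefits"],
    )
    for query, result in answers.items():
        print(query, "->", result)

asyncio.run(main())
```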
Well, you know, you and I had a chance to meet with Robby Stein, who is one of the people leading AI mode, and I was surprised by how enthusiastic you were. You said that you've really actually found this quite useful, in a way that I think I have not so far. So what are you noticing about this? I
mean the main thing is it's just such a
clean experience like on a regular
Google search results page. You and I
have talked about this like it has just
gotten very cluttered. There's a lot of
stuff there. There's ads, there's
carousels of images, there's sometimes a
shopping module, there's sometimes a
maps module. It's just hard to actually find the blue links sometimes. And I imagine that AI mode
will become more cluttered as they try
to make more money off of it. But right
now, if you go to it, it's like a much
simpler experience. It's much easier to
find what you're looking for. Yeah. And
at the same time, they're also trying to
do some really interestingly complex
stuff. Like one of the things that they
showed off during the keynote was
somebody asked a question about baseball
statistics that required finding, you
know, three or four different kind of
you know, tricky to locate stats and
then combining them al together in an
interactive chart. That was just a demo.
We don't have access to that yet. But
that is one of those things where
it's like, well, if that works, that
could be a meaningful improvement to
search. Yeah, it could be a meaningful
improvement to search. And we should
also say like it's a big unknown how all
of this will affect uh the main Google
search product, right? For now, it's a tab. They have not merged it into the main core Google search, in part because it's not monetized yet. It costs a lot more to
serve those results than a traditional
Google search. But I imagine over time
these things will kind of merge which
will have lots of implications for
publishers, people who make things on
the internet, the whole sort of economic
model of the internet. But before we uh
get dragged down that rabbit hole, um
let's just talk about a few other things
that they uh said on stage at Google IO.
So I was really struck by the usage
numbers that they trotted out for their
products. Gemini, according to them, now has 400 million monthly users. That is a lot. That is not quite as many as ChatGPT, but it is a lot more than products like Claude and other AI chatbots. They said that the tokens being output by Gemini have increased 50 times since last year. So people are using this stuff. In other words,
this is not just like some feature that
Google is shoving into these products
that people are trying to sort of
navigate around like people are really
using Gemini. I think that's right. And it's the Gemini number in particular that struck me. 400 million is a lot of people, and I don't see that many obvious ways that Google could be faking that stat. In contrast, for example, they said one and a half billion people see AI Overviews every month. It's like, well, yeah, you just put them in Google search results; that's an entirely passive phenomenon. But with Gemini, you've got to go to the website, you've got to download the app. So that tells me that people actually are finding real utility there. So that's
Gemini, but they also released a bunch of other stuff, like new image and video models. Do you want to talk about those? Yeah. So, like the other companies, they're working on text-to-image and text-to-video, and while OpenAI's models have gotten most of the attention in this regard, Google's really are quite good. I think the marquee feature for this year's I/O is that the video-generating model, Veo 3, can also generate sound. It showed us a demo, for example, of an owl flapping its wings. You hear the wings flap. It comes down to the ground. There's this sort of nervous badger character, and they exchanged some dialogue, which was basically incomprehensible, just pure slop. But they were able to generate that from scratch. And I guess that's
something. "They left behind a ball today. It bounced higher than I can jump. What manner of magic is that?" Yeah. They also announced a new Ultra
subscription to Google's AI products.
Now, if you want to be on the bleeding
edge uh of Google's AI offerings, you
can pay $250 a month for Gemini Ultra.
And Casey, I thought to myself, no one
is going to do this. Who is going to pay
$250 a month? That's a fortune for
access to Google's leading AI products.
And then I look over to my right and
there's Casey Newton in the middle of
the keynote pulling out his credit card
from his wallet and entering it in to buy a subscription to this extremely expensive AI product. So, you might have
been the first customer of this product.
Why? Well, and I hope that they don't
forget that uh when it comes time to
feed me into the large language model.
Look, I want to be able to have the latest models. And you know, one clever thing that these AI companies are doing is they're saying,
"We will give you the latest and
greatest before everyone else, but you
have to pay us a ridiculous amount of
money." And you know, if you're a
reporter and you're reporting about this
stuff every day, I do think you sort of
want to be in that camp. Now, is it true
that I now spend more on monthly AI
subscriptions than I paid for my
apartment in Phoenix in the year 2010?
Yes. And I don't feel great about it
but I'm trying to be a good journalist.
Kevin, please. Your family is dying. Another thing that made me perk up: they talked a lot about
personalization, right? This is
something we've been talking about for
years. Basically, Google has all of, you
know, billions of people's email, their
search histories, their calendars, all
their personal information, and we've
been sort of waiting for them to start
weaving that stuff in so that you can
use Gemini to do things in those
products. That has been slow, but they are taking baby steps, and they did show off a few things, including a new personalized smart replies feature that is going to be available to subscribers later this year in Gmail, so that instead of just getting the kind of formulaic suggested replies at the bottom of an email,
it'll actually kind of learn from how
you write and maybe it can access some
things in your calendar or your
documents and really like suggest a
better reply. You'll still have to like
hit send, but it'll like sort of
pre-populate a message for you. Yeah.
You know, I have to say I'm somewhat bearish on this one, Kevin, only because I think that if this were easy, it would just sort of be here already, right? When you think about how formulaic so much email is, it doesn't seem to me like it should be that hard to figure out what kind of emailer you are. I'm basically a two-sentence emailer, you know; that doesn't seem like that's hard to mimic.
Um, so that's just like an area where
I've been a little bit surprised and
disappointed. We also know large
language models do not have large
memories. So one thing that I would love
for Gmail to do, but it cannot, is just
sort of understand all of my email and
use that to inform the tone of my voice.
But it can't do that. It can only take a
much more limited subset. Is that going
to make it sort of difficult to
accurately mimic my tone? I don't know.
So, what I'm trying to say here is I
think there's a lot of problems here and
my expectations are like pretty low on
this one. Yeah, that was the part where I was like, I will believe that this exists and is good when I can use it. As with other companies, like Apple, which demoed a bunch of AI features at its developer conference last year and then never launched half of them, I have become a little bit skeptical until I can actually use the thing myself. Yeah, it really is
amazing how looking back, last year's
WWDC was just like a movie about what a
competent AI company might have done in
an alternate future. It had very little
bearing on our reality, but it was
admittedly an interesting set of
proposals. Okay, so that is the the
software AI portion of IO. There was
also a a demo of a new hardware product
that Google is working on, which are
these Android XR glasses, basically their version of what Meta has been showing off with its Orion glasses. You have a pair of glasses with sort of chunky black frames. They've got a sort of hologram lens in them, and you can actually see a little display overlaid on your vision telling you, you know, what the weather is, or what time it is, or that you have a new message. Or there's this integration with Google Maps that they showed off, where it'll show you a little miniature Google map right there inside your glasses, and it'll turn as you turn and tell you where to go. Now, they did say this is a prototype, but what did you make of this? Well, I
think a lot of it looked really cool.
Like probably my favorite part of the
demo was when the person who was demonstrating looked down at her feet, because she was getting ready to walk to a coffee shop, and the Google map was actually projected at her feet, and so she knows, okay, go to the left, go to the right. If you've ever been walking around a foreign city and desperately wanted this feature, I think
you would see that and be pretty
excited. What did you think? Um, yeah. I
I thought to myself, Google Glass is
back.
It was away for so long in the
wilderness and now it's back. And it
might actually work this time.
Absolutely. I did get to try the glasses. There was a very long line for the demo, but... Let me guess. You said, "I'm Kevin Roose. Let me in the front of the line." No, they made me wait for two hours. I mean, I didn't literally wait for two hours. I went and did some stuff and then came back. But I got my demo. It was like five minutes long.
And it was, you know, pretty basic, but it is cool. You can look around and you can say, "Hey, what's this plant?" And Gemini will kind of look at what you're seeing and tell you what the plant is. Totally. I did a demo a
few months back and also like really
enjoyed it. Um so I think there's
something here. And I think more
importantly, Kevin, consumers now when
they look at Google and Meta, they
finally have a choice. Whose advertising
monopoly do I want to feed with my
personal data? And you have consumer
choice now. And I think that's
beautiful. And that's what capitalism is
all about. So, okay, those are some of
the announcements, but what did you make
of the sort of overall tenor of the
event? What stuck out to you as far as
the vibe? So, the thing that stuck out
to me the most was just contrasting it
with last year's event. Because last
year they had this phrase that they kept
repeating: let Google do the Googling for you, which put me in the mind of somebody leaning back in their floating chair from the WALL-E movie and just sort of letting the AI run roughshod over your life. This
year, Google talked about AI very
differently. This time, they want you to
sit up. They want you to lean in. They
want you to pay them $250, and they want
you to get to work. You know, AI is your superpower. It's your bionic arm, and you're going to use it to go further than ever before.
But even while presenting that vision
Kevin, they were also very much saying, but it's going to be normal. It's going to be chill. It's going to be kind of like your life is now. You're still going to be in the backyard with your kids doing science experiments. You're still going to be planning a girls' weekend in Nashville, right? There was
not really a lot of science fiction
here. There was just a little bit of
like, oh, we uh we put a little bit of
AI in this. So, that was interesting to
me. Yeah. So I had a slightly different take, which is that I think Google is getting AGI-pilled. You know, for years now, Google has sort of distanced itself from the conversation about AGI. It had DeepMind, which was sort of its AGI division, but they were over in London and they were sort of a separate thing. And people at Google would sort of not laugh exactly, but kind of chuckle, when you asked them about AGI. It just didn't seem real to them, or it was so remote that it wasn't worth considering. They would say, "What
does this have to do with search
advertising?" Exactly. So now, you know
it's still the case that this is a
company that wants you to think about it
as a product company, a search company.
They're not like going all in on AGI
but once you start looking for it, you
do see that the the the sort of culture
of uh AI and how they people at Google
talk about AI has really been shifting.
It is it is starting to seep into
conversation here in a way that I think
is uh unusual and maybe indicative that
the technology is just getting better um
faster than even a lot of people at
Google were thinking it would. So, I
don't totally agree with you, Kevin
because while I'm sure that they're
having more conversations about AGI here
than they were a year ago, when you look
at what they're building, it doesn't
seem like there's been a lot of "rip it up and start again." It seems a lot like, how do we plug AI systems into Google-shaped holes? And maybe that will
eventually ladder up to something like
AGI, but I don't think we've seen it
quite yet. The other observation I would
make is that I think the Google of 2025
has a lot more swagger and confidence
when it comes to AI than the Google of
2024 or 2023. I mean, two years ago, Google was still trying to make Bard a thing, and I think they were feeling very insecure that OpenAI had beaten them to a consumer chatbot that had found some mass adoption. And so
they were just playing catch-up. And I
don't think anyone would have said that
Google was in the lead when it came to
generative AI just a few years ago. But
now they they feel like there is a race
and that they are in a good position to
win it. They were talking about how
Gemini stacks up well against all these
other models. It's at the top of the LMArena leaderboard for all these different categories. I don't love the way that AI
is sometimes covered as if it were like
sports. you know, who's up, who's down
who's winning, who's losing. But I do
feel like Google has the confidence now
when it comes to AI of a team that like
knows it's going to be in the playoffs
at least. And that was evident. Oh
yeah. I mean, well, when you look at the
competition, just what's happened over
the past year, you have Apple doing a
bunch of essentially fictional demos at
WWDC, and you have Meta cheating to win at LMArena, making 27 different versions of a model just to come up with one that would be good at one thing, right? So I think if you're Google,
you're looking at that and you're
thinking I could be those guys. So that
is what it felt like inside Google I/O. What was the reaction from outside? I noticed, for example, that the company's stock actually fell, not by a lot, but to a degree that suggested that Wall Street was kind of meh on a lot of what was announced. What was the reaction like outside of Google? I think the external
reaction that I saw was just struggling
a little bit to connect the dots, right?
Like that is the issue with announcing
so many things during a 2-hour period is
sometimes people don't have that one
thing that they're taking away saying, I
can't wait to try that. And when you're
just looking at a bunch of Google
products that you're already using, I think if you're an investor, it's probably hard to see why this is unlocking so much more value at Google. Now, maybe
millions of people are going to spend
$250 a month on Gemini Ultra, but unless
that happens, I can understand why some
people feel like, hm, this feels a
little like the status quo. Yeah, I see that. I also think there are many unanswered questions about how all of this will be monetized. Google has built one of the most profitable products in the history of capitalism in the Google search engine and the advertising business that supports it. It is not clear to me, whatever AI mode becomes or whatever AI features they can jam into search, if search as a category is just declining across the board, if people are not going to google.com to look things up the way they were a few years ago, I think it's an open question what the next thing is and whether Google can seize on it as effectively as they did with search.
Well, well, I think that they gave us
one vision of what that might be, and
that is shopping. A significant portion
of the keynote was devoted to one
executive talking about a new shopping
experience inside of Google where you
can take a picture of yourself, upload
it, and then sort of virtually try
things on and it will sort of use AI to
understand your proportions and, you
know, accurately map a a garment on to
you. And there was a lot of stuff in
there that would just sort of let Google
take a cut, right? Obviously, you can
advertise the individual thing to buy.
Maybe you're taking some sort of cut of the payment. There's an affiliate fee in there somewhere. So, one of the things I'm trying to do as I cover Google going forward is understand that yes, search is the core, but Gemini could be a springboard to build a lot of other really valuable businesses. An
important question I know that I always
ask you when I go to these things. How
was the food? Let's see. I think the
food was really nice. So, here's the
thing. Last year it was a purely savory
experience at breakfast and I am
shamefully an American who likes a
little sweet treat when I woke up. This
year they had both bagels and an apple
cinnamon coffee cake and so when I was
heading into that keynote I was in a
pretty good
mood. They have these little bottles of cold brew, and I'm a huge caffeine addict, so I took two of them. And boy, I was on rocket fuel all day. I was just humming around. I was bouncing off the walls. I was doing parkour. I was feeling great. I thought I saw you warming up with the a cappella team. Now it all makes sense.
When we come back, we'll talk with
Demis Hassabis, CEO of Google DeepMind, about
his vision of the AI future.
[Music]
Demis, welcome back to Hard Fork. Thanks
for having me again. A lot has happened
since the last time you were on the
show. Um, most notably, you won a Nobel
Prize. Congrats on that. Um, ours must
be still in the mail. Can you put in a
good word for next year with the
committee? I will do. I will do. I
imagine it's very exciting to win a
Nobel Prize. I know that has been a goal of yours for a long time. I imagine it also leads to a lot of people giving you crap during everyday activities, like if you're struggling to work the printer and people are just like, "Oh, Mr. Nobel Laureate." Does that happen? A little bit. I mean, look, maybe it's a good excuse to not have to fix those kinds of things, right? So it's more of a shield.
So, you just had Google I/O, and it was
really the Gemini show. I mean, I think
Gemini's name was mentioned something
like 95 times in the keynote. Of all the
stuff that was announced, what do you
think will be the biggest deal for the
average user?
Wow. Well, we did announce a lot of things. For the average user, I think it's the new powerful models, and I hope this Astra-type technology coming into Gemini Live. I think it's really magical, actually, when people use it for the first time and they realize that AI is already capable today of doing much more than what they thought. And then I guess Veo 3 was the biggest announcement of the show, probably, and it seems to be going viral now, and that's pretty exciting as well, I think. Yeah.
One thing that struck me about I/O this year, compared to previous years, is that it seems like Google is sort of getting AGI-pilled, as they say. I remember interviewing researchers at Google even a couple of years ago, and there was a little taboo about talking about AGI. They would sort of be like, "Oh, that's Demis and his DeepMind people in London. That's sort of their crazy thing that they're excited about, but here we're doing, you know, real research." But now you've got senior Google executives talking openly about it. What explains that
shift? I think the AI part of the equation is becoming more and more central. I sometimes describe Google DeepMind now as the engine room of Google, and I think you saw that in the keynote yesterday, really, if you take a step back. And then it's very clear, I think you could say AGI is maybe the right word, that we're quite close to this human-level general intelligence, maybe closer than people thought even a couple of years ago, and it's going to have broad, crosscutting impact. And I think that's another thing you saw at the keynote: it's literally popping up everywhere, because it's this horizontal layer that's going to underpin everything. I think everyone is starting to understand that, and maybe a bit of the DeepMind ethos is bleeding into the general Google, which is great. You
mentioned that Project Astra is powering some things that maybe people don't even realize AI can do yet. I think this speaks to a real challenge in the AI business right now, which is that the models have these pretty amazing capabilities, but either the products aren't selling them or the users just haven't figured them out yet. So how are you thinking about that challenge, and how much do you bring yourself to the product question as opposed to the research question? Yeah,
it's a great question. I think one of the challenges of this space is that the underlying tech is moving unbelievably fast, and I think that's quite different even from the other big revolutionary techs, the internet and mobile. At some point you get some sort of stabilization of the tech stack, so that the focus can be on product, on exploiting that tech stack. What we've got here, which I think is very unusual but also quite exciting from a researcher perspective, is that the tech stack itself is evolving incredibly fast, as you guys know. So I think that makes it uniquely challenging on the product side, not just for us at Google and DeepMind, but for startups, for anyone really, any company, small or large: what do you bet on right now, when that could be 100% better in a year, as we've seen? And so you've got this interesting thing where you need fairly deeply technical product people, product designers and managers, I think, in order to intercept where the technology may be in a year. There are things it can't do today, and you want to design a product that's going to come out in a year. So you've got to have a pretty deep understanding of the tech and where it might go to work out what features you can rely on. So it's an interesting one. I think that's why you're seeing so many different things being tried out, and then if something works, we've got to really double down quickly on that. Yeah. During your
keynote, you talked about Gemini as powering both sort of productivity-assistant-style stuff and also fundamental science and research challenges, and I wonder, in your mind, is that the same problem that one great model can solve, or are those very different problems that just require different approaches? When you look at it, it looks like an incredible breadth of things, which is true, and how are these things related, other than the fact that I'm interested in all of them? But that was always the idea with building general intelligence: built truly generally, in the way that we're doing it, it should be applicable to almost anything, be that productivity, which is very exciting, helping billions of people in their everyday lives, or cracking some of the biggest problems in science. 90% of it, I would say, is the underlying core general models, in our case Gemini, especially 2.5. And in most of these areas you still need additional applied research, or a little bit of special-casing for the domain, maybe it's special data or whatever, to tackle that problem. And maybe we work with domain experts in the scientific areas. But underlying it all, when you crack one of those areas, you can also put those learnings back into the general model, and then the general model gets better and better. So it's a kind of very interesting flywheel, and it's great fun for someone like me who's very interested in many things. You get to use this technology and go into almost any field that you find interesting. A thing that a lot of
AI companies are wrestling with right
now is how many resources to devote to
sort of the core AI push on the
foundation models, making the models
better at the basic level versus how
much time and energy and money do you
spend trying to spin out parts of that
and commercialize it and turn it into
products. And I imagine this is both a resources challenge and a personnel challenge, because say you join DeepMind as an engineer and you want to build AGI, and then someone from Google comes to you and says, we actually want your help building the shopping thing that's going to let people try on clothes. Is that a challenging conversation to have with people who joined for one reason and are maybe asked to work on something else?
Yeah. Well, it's sort of self-selecting internally; we don't have to do that. That's one advantage of being quite large: there are enough engineers on the product teams and in the product areas who can deal with the product development, and if the researchers want to stay in core research, that's absolutely fine, and we need that. But actually, you'll find a lot of researchers are quite motivated by real-world impact, be that in medicine, obviously, and things like Isomorphic, but also to have billions of people use their research. It's actually really motivating, and so there are plenty of people who like to do both. So there's no need for us to have to pivot people to certain things. You did a panel yesterday
with Sergey Brin, Google's co-founder, who has been working on this stuff back in the office, and, interestingly, he has shorter AGI timelines than you. He thought AGI would arrive before 2030, and you said just after. He actually accused you of sandbagging, basically artificially pushing out your estimates so that you could underpromise and overdeliver. But I'm curious about that, because you will often hear people at different AI companies arguing about what the timelines are, but presumably you and Sergey have access to all the same information and the same road maps, and you understand what's possible and what's not. So what is he seeing that you're not, or vice versa, that leads you to different conclusions about when AGI is going to arrive? Well,
first of all, there wasn't that much difference in our timelines if he's just before 2030 and I'm just after. Also, my timeline has been pretty consistent since the start of DeepMind in 2010. We thought it was roughly a 20-year mission, and amazingly, we're on track. So it's somewhere around then, I would think. And I have, obviously, a probability distribution; most of the mass of that is between five and ten years from now. Partly it's that predicting anything precisely 5 to 10 years out is very difficult, so there are uncertainty bars around that. And then there's also uncertainty about how many more breakthroughs are required, and about the definition of AGI. I have quite a high bar, which I've always had, which is that it should be able to do all of the things that the human brain can do, even theoretically. And that's a higher bar than, say, what a typical individual human could do, which is obviously very economically important and would be a big milestone, but not, in my view, enough to call it AGI. We talked on stage a little bit about what is missing from today's systems: true out-of-the-box invention and thinking, sort of inventing a conjecture rather than just solving a math conjecture. Solving one is pretty good, but actually inventing something like the Riemann hypothesis, something as significant as that, that mathematicians agree is really important, is much harder. And also consistency. Consistency is a requirement of generality, really: it should be very, very difficult for even top experts to find flaws, especially trivial flaws, in the systems, which we can easily find today; the average person can do that. So there's a sort of capabilities gap and there's a consistency gap before we get to what I would consider AGI. And when you think
about closing that gap, do you think it
arrives via incremental 2 to 5% improvements in each successive model,
just kind of stacked up over a long
period of time? Or do you think it's
more likely that we'll hit some sort of
technological breakthrough and then all
of a sudden there's liftoff and we hit
some sort of intelligence explosion? I think it could be both, and I think for sure both are going to be useful, which is why we push unbelievably hard on the scaling, on what you would call the incremental, although actually there's a lot of innovation even in that to keep moving it forward: pre-training, post-training, inference-time compute, all of that stack. So there's actually lots of exciting research, and we showed some of that: the diffusion model, the Deep Think model. So we're innovating at all parts of the traditional stack, should we call it. And then on top of that, we're doing more green-field things, more blue-sky things, like Alpha Evolve, maybe, you could include in that. Is there a difference between a green-field thing and a blue-sky thing? I'm not sure. Maybe they're pretty similar. Some new area, let's call it. And then that could come back into the main branch, right? As you both know, I've been a fundamental believer in foundational research. We've always had the broadest, deepest research bench, I think, of any lab out there. And that's what allowed us to do past big breakthroughs, obviously transformers, but AlphaGo, AlphaZero, all of these things, distillation. And to the extent any of those things are needed again, another big breakthrough of that level, I would back us to do it. We're pursuing lots of very exciting avenues that could bring that sort of step change, as well as the incremental, and then of course they also interact, because the better your base models, the more things you can try on top of them, again like Alpha Evolve, adding in evolutionary programming, in that case, on top of the LLMs.
We recently talked to Karen Hao, a journalist who just wrote a book about AI, and she was making an argument essentially against scale: that you don't need these big general models that are incredibly energy-intensive and compute-intensive and require billions of dollars and new data centers and all kinds of resources to make happen. Instead of doing that kind of thing, you could build smaller models. You could build narrower models. You could have a model like AlphaFold that is just designed to predict the 3D structures of proteins. You don't need a huge behemoth of a model to accomplish that. What's your response to that?
Well, I think you need those big models. We love big and small models. You often need the big models to train the smaller models. We're very proud of our Flash models, which we call our workhorse models: really efficient, some of the most popular models. We use a ton of models of that size internally. But you can't build those kinds of models without distilling from the larger teacher models. And even things like AlphaFold, and obviously I'm a huge advocate of more of those types of models, can tackle really important problems in science and medicine right now. We don't have to wait for AGI; we can tackle them today. And that will require taking the general techniques but then potentially specializing them, in that case around protein structure prediction. I think there's huge potential for doing more of those things, and we do that largely in our AI-for-science work. I think we're producing something pretty cool on that front pretty much every month these days, and I think there should be a lot more exploration there; probably a lot of startups could be built combining some kind of general model that exists today with some domain specificity. But if you're interested in AGI, you've got to push both sides of that. It's not an either-or in my mind; I'm an "and," right? Let's scale, let's look at specialized techniques combining that, hybrid systems they're sometimes called, and let's look at new blue-sky research that could deliver the next transformers. We're betting on all of those things.
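Concretely, "distilling from the larger teacher models" usually means training a small student model to match the teacher's full output distribution rather than just the hard labels. A minimal sketch of the standard recipe, assuming hypothetical teacher and student models; this is the textbook loss, not Google's actual training code:

```python
import torch
import torch.nn.functional as F

# Standard knowledge-distillation loss (Hinton-style); T softens the
# distributions so the student sees the teacher's "dark knowledge".
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: student learns to match the teacher's full distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage sketch, inside a training loop (teacher/student are hypothetical):
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits, batch_labels)
```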
You mentioned Alpha Evolve, something that Kevin and I were both really fascinated by. Tell us what Alpha Evolve is. Well, at a high level, it's basically taking our latest Gemini models, actually two different ones, to generate ideas, hypotheses about programs and other mathematical functions. And then those go into a sort of evolutionary programming process to decide which of them are most promising, and that gets ported into the next step. And tell us a little bit about what evolutionary programming is. It sounds very exciting. Yeah. So it's basically a way for systems to explore new space, right? Like in genetics: what things should we mutate to give you a kind of new organism? You can think about it the same way in programming or in mathematics. You change the program in some way, then you compare it to some answer you're trying to get, and then the ones that fit best according to some evaluation function you put back into the next round of generating new ideas. We have our most efficient model, the Flash model, generating possibilities, and then we have the Pro model critiquing them, deciding which of them is most promising, to be selected for the next round of evolution.
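The loop Hassabis describes, generate variants, score them, select survivors into the next round, can be sketched in a few lines. The generate_variant and evaluate functions below are hypothetical stand-ins for the Flash model proposing candidates and the Pro model or an automated checker scoring them, on a toy objective; the real Alpha Evolve system is far more elaborate:

```python
import random

def generate_variant(parent: str) -> str:
    """Stand-in for an LLM proposing a mutation of a parent program."""
    return parent + random.choice(["+1", "*2", "-3"])

def evaluate(candidate: str) -> float:
    """Stand-in for a scorer; here a toy objective: evaluate close to 42."""
    try:
        return -abs(eval(candidate) - 42)
    except Exception:
        return float("-inf")  # broken candidates are discarded

def evolve(seed: str, generations: int = 20, pop: int = 8, keep: int = 2) -> str:
    population = [seed]
    for _ in range(generations):
        # Each survivor spawns several mutated candidates...
        candidates = [generate_variant(p) for p in population for _ in range(pop)]
        # ...and only the best-scoring ones are selected into the next round.
        candidates.sort(key=evaluate, reverse=True)
        population = candidates[:keep]
    return population[0]

print(evolve("1"))  # prints the best arithmetic expression found
```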
So it's sort of like an autonomous AI research organization, almost, where you have some AI coming up with hypotheses and other AI testing and supervising them, and the goal, as I understand it, is to have an AI that can improve itself over time, or suggest improvements to existing problems. Yes.
I think that's why people are so excited about it, and why we're excited about it: it's the beginning of a kind of automated process. It's still not fully automated, and it's still relatively narrow. We've applied it to many things, like chip design, scheduling AI tasks on our data centers more efficiently, even improving matrix multiplication, one of the most fundamental units of training algorithms. So it's actually amazingly useful already, but it's still constrained to domains that are provably correct, right, which obviously math and coding are, but we need to fully generalize that. But it's interesting,
because I think for a lot of people the
knock they have on LLMs in general is, well, all you can really give me is the statistical median of your training data. But what you're saying is we now
have a way of going beyond that to
potentially generate novel ideas that
are actually useful in advancing the
state-of-the-art. That's right. Alpha Evolve is another approach, using evolutionary methods, but we already had evidence of that even way back in the AlphaGo days. AlphaGo came up with new Go strategies, most famously move 37 in game two of our big Lee Sedol world championship match. And okay, it was limited to a game, but it was a genuinely new strategy that had never been seen before, even though we've played Go for hundreds of years. So that's when I kicked off our AlphaFold projects and science projects, because I was waiting to see evidence of that kind of spark of creativity, you could call it, or originality at least, within the domain of what we know. But there's still a lot further to go. We know that these kinds of models, paired with things like Monte Carlo tree search or reinforcement learning planning techniques, can get you to new regions of the space to explore, and evolutionary methods are another way of going beyond what the current model knows, forcing it into a new regime it hasn't seen before.
I've been looking for a good Monte Carlo
tree for so long now. If you could help
me find one, it would honestly be a huge
help. One of these things could help.
So, I read the Alpha Evolve paper, or, to be more precise, I fed it into NotebookLM and had it make a podcast that I could then listen to that would explain it to me at a slightly more elementary level. And one fascinating thing that stuck out to me is a detail about how you were able to make Alpha Evolve more creative. And one of the ways that you did it was by essentially forcing the model to hallucinate. I mean, so many people
right now are obsessed with eliminating
hallucinations but it seemed to me like
one way to read that paper is that
there's there's actually a scenario in
which you want models to hallucinate or
be creative whatever you want to call
it. Yes. Well, I think that's right. Hallucination when you want factual things is obviously something you don't want. But in creative situations, you can think of it as a little bit like lateral thinking in an MBA course or something, right? Just create some crazy ideas. Most of them don't make sense, but the odd one or two may get you to a region of the search space that turns out to be actually quite valuable once you evaluate it afterwards. And so you can maybe substitute the word hallucination for imagination at that point, right? They're obviously two sides of the same coin.
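One generic way to dial hallucination toward imagination is sampling temperature: raising it flattens the model's output distribution, so lower-probability, "crazier" ideas get drawn more often, with a stricter evaluation pass afterwards. A toy sketch of that generic trick, not anything specific to Alpha Evolve:

```python
import math
import random

# Higher temperature flattens the distribution over candidate ideas,
# so unlikely options are sampled more often (more "imagination").
def sample(logits: dict[str, float], temperature: float) -> str:
    scaled = {idea: l / temperature for idea, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {idea: math.exp(v) / z for idea, v in scaled.items()}
    return random.choices(list(probs), weights=probs.values())[0]

ideas = {"conventional fix": 3.0, "odd refactor": 1.0, "wild rewrite": -1.0}
print([sample(ideas, 0.5) for _ in range(5)])  # mostly the safe choice
print([sample(ideas, 2.0) for _ in range(5)])  # far more exploration
```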
Yeah. I did talk to one AI safety person who was a little bit worried about Alpha Evolve, not because of the actual technology and the experiments, which this person said are fascinating, but because of the way it was rolled out. Google DeepMind created Alpha Evolve, then used it to optimize some systems inside Google and kept it sort of hidden for a number of months, and only then released it to the public. And this person was saying, well, if we really are getting to the point where these AI systems are starting to become recursively self-improving, and they can sort of build a better AI, doesn't that imply that if Google DeepMind does build AGI, or even superintelligence, it's going to keep it to itself for a while, rather than doing the responsible thing and informing the public? Well, I think it's a bit of both,
actually. First of all, Alpha Evolve is a very nascent self-improvement thing, right? It's still got a human in the loop, and it's only shaving off, albeit important, percentage points on already existing tasks. That's valuable, but it's not creating any kind of step change. And there's a trade-off between carefully evaluating things internally before you release them to the public, out into the world, and then also getting the extra critique back, which is also very useful, from the academic community and so on. And also, we have a lot of trusted-tester types of programs that we talk about, where people get early access to these things and then give us feedback and stress-test them, sometimes including the safety institutes as well. But my understanding was you
weren't just red-teaming this internally within Google. You were actually using it to make the data centers more efficient, using it to make the kernels that train the AI models more efficient. So I guess what this person is saying is, we want to start building good habits around these things now, before they become something like AGI, and they were just a little worried that maybe this is going to be something that stays hidden for longer than it needs to. So I would love to hear your response to that. Yeah. Well, look, I
mean, I think that that system is not really anything, I would say, that has any risk on the AGI front. And I think today's systems, although very impressive, are still not that powerful from any kind of AGI-risk standpoint that maybe this person was talking about. And I think you need to have both. You need incredibly rigorous internal tests of these things, and then you also need to get collaborative input from external parties. So I think it's a bit of both. I actually don't know the details of the Alpha Evolve process for the first few months; it was just FunSearch before, and then it became more general. So it's sort of evolved itself over the last year into this general-purpose tool. And it still has a long way to go before we can actually use it on our main branch, which is the point at which I think it becomes more serious, like with Gemini; it's currently separate from that.
Let's talk about AI safety a little more broadly. It's been my observation that the further back in time you go, and the less powerful the AI systems, the more everyone seemed to talk about the safety risks, and it seems like now, as the models improve, we hear about it less and less, including, you know, at the keynote yesterday. So I'm curious what you make of this moment in AI safety: whether you feel like you're paying enough attention to the risks that could be created by the systems that you have, and whether you are as committed to it as you were, say, three or four years ago, when a lot of these outcomes seemed less likely. Yeah. Well, we're
just as committed as we've ever been. From the beginning of DeepMind, we planned for success. Success meant something looking like this. This is what we imagined. And it's sort of unbelievable still that it's actually happened, but it is in the Overton window of what we thought was going to happen if these technologies really did develop the way we thought they were going to. And the risks, and attending to mitigating those risks, were part of that. So we do a huge amount of work on our systems. I think we have very robust red-teaming processes, both pre- and post-launch. And we've learned a lot. That's the difference now: these systems, albeit early systems, have contact with the real world. I'm sort of persuaded now that that has been a useful thing overall. And I wasn't sure; five or ten years ago I may have thought maybe it's better staying in a research lab and collaborating with academia. But actually, there are a lot of things you don't get to see or understand unless millions of people try it. So it's this weird trade-off: you only find all these edge cases when there are millions of smart people trying your technology. However big your testing team is, it's only going to be a hundred people or a thousand people or something, so it's not comparable to tens of millions of people using your systems. But on the other hand, you want to know as much as possible ahead of time so you can mitigate the risks before they happen.
So this is interesting, and it's good learning. I think what's happened in the industry in the last two or three years has been great, because we've been learning while the systems are not that powerful or risky, as you were saying earlier, right? I think things are going to get very serious in two or three years' time, when these agent systems start becoming really capable. We're only seeing the beginnings of the agent era, let's call it, but you can imagine, and hopefully you understood from the keynote, what the ingredients are and what it's going to come together with. And then I think we really need a step change in research on analysis and understanding, on controllability. But the other key thing is, it's got to be international. You know, that's pretty difficult, and I've been very consistent on that, because it's a technology that's going to affect everyone in the world, and it's being built by different companies in different countries. So you've got to get some international kind of norm, I think, around what we want to use these systems for, and what kinds of benchmarks we want to test safety and reliability on. But there's plenty of work to get on with now; we don't have those benchmarks. We, the industry, and academia should be agreeing on a consensus of what those are. What role do you want
to see export controls play in doing
what you just said? Well, export
Well, export controls are a very complicated issue, and obviously geopolitics today is extremely complicated. I can see both sides of the argument. There's the risk of uncontrolled proliferation of these technologies: do you want different places to have frontier-model training capability? I'm not sure that's a good idea. But on the other hand, you want Western technology to be the thing that's adopted around the world. So it's a complicated trade-off. If there were an easy answer, I'd be shouting it from the rooftops. But it's nuanced, like most real-world problems are.

Do you think we're heading into a bipolar conflict with China over AI, if we aren't in one already? I just recently saw the Trump administration making a big push to turn countries in the Gulf, like Saudi Arabia and the UAE, into AI powerhouses: have them use American chips to train models that will not be accessible to China and its AI powers. Do you see that becoming the foundations of a new global conflict?
Well, I hope not, but I think, short term, AI is getting caught up in the bigger geopolitical shifts that are going on. It's just part of that, and it happens to be one of the most topical new things that's appearing. But what I'm hoping is that as these technologies get more and more powerful, the world will realize we're all in this together, because we are. And for the last few steps towards AGI, hopefully we're on the longer timelines, the ones I'm thinking about, so that we get time to build the collaboration we need, at least on a scientific level, before then.

Do you feel like you're in the final home stretch to AGI? Sergey Brin, Google's co-founder, had a memo, reported on by my colleague at the New York Times earlier this year, that went out to Google employees and said, you know, we're in the home stretch and everyone needs to get back to the office and be working all the time, because this is when it really matters. Do you have that sense of finality, of entering a new phase or an endgame?

I think we are past the middle game, that's for sure. But I've been working every hour there is for the last 20 years because I felt how important and momentous this technology would be. We've thought it was possible for 20 years, and I think it's coming into view now. I agree with that. And whether it's five years or ten years or two years, those are all actually quite short timelines when you're discussing the enormity of the transformation this technology is going to bring. None of those timelines are very long.

When we come back, more from Demis Hassabis about the strange futures that lie ahead.
[Music]
We're going to switch to some more general questions about the AI future.

Sure.

A lot of people now, at least in conversations I'm involved in, are starting to think about what the world might look like after AGI. The context in which I actually hear the most about this is from parents who want to know what their kids should be doing and studying, and whether they'll go to college. You have kids that are older than my kid. How are you thinking about that?

So when it comes to kids, and I get asked this quite a lot by university students too, first of all, I wouldn't dramatically change some of the basic advice on STEM. Getting good at things like coding I would still recommend, because whatever happens with these AI tools, you'll be better off understanding how they work, how they function, and what you can do with them. I would also say: immerse yourself now. That's what I would be doing as a teenager today, trying to become a sort of ninja at using the latest tools. I think you can almost be superhuman in some ways if you get really good at using all the latest, coolest AI tools. But don't neglect the basics, because you need the fundamentals. And then I think teach the meta-skills: learning to learn, above all. The only thing we know for sure is that there's going to be a lot of change over the next ten years. So how does one get ready for that? What kinds of skills are useful? Creativity, adaptability, resilience. I think all of these meta-skills are what will be important for the next generation. And it'll be very interesting to see what they do, because they're going to grow up AI-native, just as the last generation grew up mobile- and tablet-native, and the one before that, which was my era, grew up with the internet and computers. The kids of each era always seem to adapt and make use of the latest, coolest tools. And I think there's more we can do on the AI side: if people are going to use these tools for school and education, let's make them really good for that, and provably good. I'm very excited about bringing AI to education in a big way, and also, if you had an AI tutor, bringing it to poor parts of the world that don't have good educational systems. So I think there's a lot of upside there too.
Another thing that kids are doing with AI is chatting a lot with digital companions. Google DeepMind doesn't make any of these companions yet. Some of what I've seen so far seems pretty worrying. It seems pretty easy to create a chatbot that does nothing but tell you how wonderful you are, and that can lead into some dark and weird places. So I'm curious what observations you've had as you look at this market for AI companions, and whether you think, I might want to build this someday, or, I'm going to leave that to other people.

Yeah, I think we've got to be very careful as we start entering that domain, and that's why we haven't yet and why we're being very thoughtful about it. My view on this is more through the lens of the universal assistant that we talked about yesterday, which is something that's incredibly useful for your everyday productivity. It gets rid of the boring, mundane tasks that we all hate doing, to give you more time for the things you love doing. I also really hope these assistants will enrich your life by giving you incredible recommendations, for example, on all sorts of amazing things you didn't realize you would enjoy: delighting you with surprising things. Those are the ways I'm hoping these systems will go. And on the positive side, if this assistant becomes really useful and knows you well, you could program it, obviously with natural language, to protect your attention. So you could almost think of it as a system that works for you as an individual. It's yours, and it protects your attention from being assaulted by other algorithms that want your attention, which actually has nothing to do with AI. That's what most social media sites are doing, effectively: their algorithms are trying to grab your attention, and I think that's actually the worst thing. It would be great to protect your attention so you can be more in creative flow, or whatever it is you want to do. That's how I would want these systems to be useful to people.

If you could build a system like that, I think people would be incredibly happy. I think right now people feel assailed by the algorithms in their lives, and they don't know what to do about it.
Well, the reason is that you've got one brain. Take a social media stream: you have to dip into that torrent to get the piece of information you want, but you're doing it with the same brain, so you've already affected your mind and your mood by dipping into the torrent to find the valuable piece of information you were after. If a digital assistant did that for you, you'd only get the useful nugget, and you wouldn't need to break your mood, or whatever you're doing that day, or your concentration with your family. I think that would be wonderful.

Yeah, Casey loves that idea.

You love that idea. I love this idea of an AI agent that protects your attention from all the forces trying to assault it. I'm not sure how the ads team at Google is going to feel about it, but we can ask them when the show comes out.
Some people are starting to look at the job market, especially for recent college graduates, and worry that we're already seeing signs of AI-powered job loss. Anecdotally, I talk to young people who, a couple of years ago, might have been interested in going into fields like tech or consulting or finance or law, and who are just saying, I don't know that these jobs are going to be around much longer. A recent article in the Atlantic wondered if we're starting to see AI competing with college graduates for these entry-level positions. Do you have a view on that?

I haven't looked at that. I haven't seen the studies on it, but maybe it's starting to appear now. I don't think there are any hard numbers on it yet; at least I haven't seen them. For now, I mostly see these as tools augmenting what you can do and what you can achieve. Maybe after AGI things will be different again, but over the next five to ten years, I think we're going to see what normally happens with big new technology shifts: some jobs get disrupted, but then new, usually more valuable and more interesting jobs get created. I do think that's what's going to happen in the nearer term, for today's graduates and the next five years, let's say. It's very difficult to predict after that; that's part of the broader societal change we need to get ready for.

I think the tension there is that you're right, these tools do give people so much more leverage, but they also reduce the need for big teams of people doing certain things. I was talking to someone recently who said they had been at a data science company in their previous job that had 75 people working on data science tasks, and now they're at a startup where one person does the work that used to require 75 people. So I guess the question I'd be curious to get your view on is: what are the other 74 people supposed to do?
Well, look, I think these tools are going to unlock the ability to create things much more quickly. So I think there'll be more people doing startup things; there's a lot more surface area one could attack and try with these tools than was possible before. Take programming as an example. Obviously these systems are getting better at coding, but the best coders, I think, are getting differential value out of them, because they still understand how to pose the question, architect the whole codebase, and check what the code does. But simultaneously, at the hobbyist end, it's allowing designers and maybe non-technical people to vibe-code things, whether that's prototyping games or websites or movie ideas. So in theory, those 70-odd people could be creating new startup ideas. Maybe it's going to be less about big teams and more about smaller teams that are highly empowered by AI tools. But that goes back to the education question: which skills are now important? It might be different skills, like creativity, vision, and design sensibility, that become increasingly important.

Do you think you'll hire as many engineers next year as you hire this year?

I think so. Yeah, there's no plan to hire fewer. But again, we have to see how fast the coding agents improve. Today they can't do things on their own; they're just helpful, for the best human coders.
Last time we talked to you, we asked you about some of the more pessimistic views about AI among the public, and one of the things you said to us was that the field needed to demonstrate concrete use cases that were just clearly beneficial to people in order to shift this. My observation is that there are even more people now who are actively antagonistic toward AI, and I think maybe one reason is that they hear folks at the big labs saying, pretty loudly, eventually this is going to replace your job, and most people just think, well, I don't want that. So I'm curious, looking back on that past conversation, whether you feel we have seen enough use cases to start to shift public opinion, and if not, what some of those things might be that would actually change views here.
Well, I think we're working on those things; they take time to develop. I think a kind of universal assistant would be one of them, if it was really yours and working for you effectively: technology that works for you. And I think this is what economists and other experts should be working on: does everyone manage a suite, a fleet, of agents that do things for you, including potentially earning you money or building you things? Does that become part of the normal job process? I could imagine that in the next four or five years. I also think that as we get closer to AGI and we make breakthroughs, as we probably talked about last time, in materials science, energy, fusion, these sorts of things, helped by AI, we should start getting to a position in society where we're approaching what I would call radical abundance, where there are a lot of resources to go around. And then it becomes more of a political question of how you distribute that in a fair way. I've heard the term universal high income, something like that, which I think is probably going to be good and necessary, but obviously there are a lot of complications that need to be thought through. And in between, there's this transition period, between now and whenever we reach that situation, where the question is what we do about the change in the interim, and that depends on how long the interim is, too.
What part of the economy do you think AGI will transform last?

Well, I think the parts of the economy that involve human-to-human interaction and emotion will probably be the hardest things for AI to do.

Aren't people already doing AI therapy and talking with chatbots for things they might have paid someone $100 an hour for?

Well, therapy is a very narrow domain, and there's a lot of hype about those things. I'm not actually sure how much of it is really going on in terms of affecting the real economy, rather than being more toy things, and I don't think the AI systems are capable of doing it properly yet. But the kind of emotional connection we get from talking to each other, and from doing things in nature, in the real world, I don't think AI can really replicate all of that.

So if you lead hikes, that'd be a good job.

Yeah. Yeah, climb Everest.

My intuition on this is that it's going to be some heavily regulated industry where there will just be a massive pushback on the use of AI to displace labor or take people's jobs, like healthcare or education or something like that.

You think it's going to be an easier lift in those heavily regulated industries? I don't know. It might be, but then we have to weigh up, as a society, whether we want all the positives: for example, curing all diseases, or finding new energy sources. I think these things would be clearly very beneficial for society, and we need them for our other big challenges. It's not as if AI is the only challenge society faces. I think AI can be a solution to a lot of those other challenges, be that energy, resource constraints, aging, disease, water access, climate; a ton of problems facing us today. I think AI can potentially help with all of them. And I agree with you, society will need to decide what it wants to use these technologies for. But what's also changing, as we discussed earlier with products, is that the technology is going to keep advancing, and that will open up new possibilities, like radical abundance and space travel, which are a little out of scope today unless you read a lot of sci-fi, but which I think are rapidly becoming real.
During the Industrial Revolution, there were lots of people who embraced new technologies and moved from farms to cities to work in the new factories; they were the early adopters on that curve. But that was also when the transcendentalists started retreating into nature and rejecting technology. That's when Thoreau went to Walden Pond, and there was a big movement of Americans who looked at the new technology and said, "I don't think so. Not for me." Do you think there will be a similar movement around rejection of AI? And if so, how big do you think it'll be?

I don't know how big it'll be, but there could well be a get-back-to-nature movement. I think a lot of people will want to do that, and this technology potentially gives them the room and the space to do it. If you're in a world of radical abundance, I fully expect that's what a lot of us will want to use it for. I tend to think about it in terms of spacefaring and maximum human flourishing, but getting back to nature will be exactly the kind of thing a lot of us choose to do, with the time and the space and the resources to do it.
Are there parts of your life where you say, "I'm not going to use AI for that," even though it might be pretty good at it, for some reason like wanting to protect your creativity or your thought process or something else?

I don't think AI is good enough yet to have impinged on any of those areas. Mostly I'm using it for things like what you did with NotebookLM, which I find great: breaking the ice on a new scientific topic and then deciding whether I want to go deeper into it. That's one of my main use cases, along with summarization, those sorts of things. I think those are all just helpful. But we'll see. I haven't got any examples of what you suggested yet; maybe as AI gets more powerful, there will be.
talked about this feeling of excitement
mixed with a kind of melancholy about
the progress that AI was making in
domains where he had spent a lot of try
time trying to be very good like coding.
Yes. um where it was like you see a new
coding system that comes out, it's
better than you, you think that's
amazing, and then your second thought is
like, "Ooh, that stings a little bit."
Have you had any experiences like that?
So maybe maybe one reason it doesn't
sting me so much is I've had that
experience when I was very young with
chess. So you know um chess was going to
be my first career and you know I was
playing pretty professionally when I was
a kid for the England junior teams and
then Deep Blue came along, right? And
clearly uh the computers were going to
be much more powerful than the world
champion forever after that. And so, but
yet I still enjoy playing chess. Um
people still do. It's different, you
know, but it's a bit like I can, you
know, Usain Bolt, we celebrate him for
for running the 100 meters incredibly
fast, but we've got cars, but we don't
care about that, right? Like it's we
we're interested in other humans doing
it. And um I think that'll be the same
with robotic football and all of these
other things. So, um and that maybe goes
back to what we discussed earlier about
what I think in the end we're interested
in in other human beings. That's why
even like a novel, maybe it maybe AI
could write one day a novel that's sort
of technically good, but I don't think
it would have the same soul or
connection to the reader that um uh if
you knew it was written by an AI, at
least as far as I can see for now. You
mentioned robotic football. Is that a
real thing? We're not sports fans, so I
just want to make sure I haven't missed
something. I was meaning soccer. Yeah.
No. Yeah. No, no. Uh I don't know. I I I
there are there are Robocup uh sort of
soccer type little robots trying to kick
balls and things. Uh I'm not sure how
serious it is, but there is a there is a
field of of robotic football. You you
You mentioned that a novel written by a robot might not feel like it has a soul. I have to say, as incredible as the technology in Veo or Imagen is, I sort of feel that way about it: it's beautiful to look at, but I don't know what to do with it. You know what I mean?

Exactly. And that's why we work with great artists like Darren Aronofsky, and with Shankar on the music. I totally agree. These are tools, and they can come up with technically good things. I mean, Veo 3 is unbelievable; when I look at some of the things going viral at the moment, with the voices, I hadn't realized how big a difference audio was going to make to the video. It really brings it to life. But it's still not, as Darren was saying yesterday when we were discussing it in an interview, he brings the storytelling. It hasn't got the deep storytelling of a master filmmaker or a master novelist at the top of their game. And it might never have it; it may always feel like something's missing. It's the soul, for want of a better word, of the piece: the real humanity, the magic, if you like, of the great works of art. When I see a Van Gogh or a Rothko, why does it touch you? The hairs go up on the back of my spine, because I remember what they went through and the struggle to produce it, right there in every one of Van Gogh's brushstrokes, his torture. I'm not sure what it would mean if the AI mimicked that and you were told; it would be like, so what? And so I think that is the piece that, at least as far as I can see out five to ten years, the top human creators will always be bringing, and that's why we've built all of our tools, Veo and Lyria, in collaboration with top creative artists.
The new pope, Pope Leo, is reportedly interested in AGI. I don't know if he's AGI-pilled or not, but it's something he's spoken about before. Do you think we will have a religious revival, or a renaissance of interest in faith and spirituality, in a world where AGI is forcing us to think about what gives our lives meaning?

I think that potentially could be the case. I actually did speak to the last pope about it, and the Vatican has been interested in these matters even prior to this pope; I haven't spoken to him yet. How do AI and religion, and technology in general and religion, interact? What's interesting about the Catholic Church, and I'm a member of the Pontifical Academy of Sciences, is that they've always had, which is strange for a religious body, a scientific arm, whose founder they like to say was Galileo. And it's actually genuinely separate; I always thought that was quite interesting. People like Stephen Hawking, avowed atheists, were part of the academy, and that's partly why I agreed to join it: because it's a fully scientific body. And I was fascinated to find that they've been interested in this for ten-plus years; they were on it early, in terms of how interesting, and how philosophically significant, this technology will be. I actually think we need more of that type of thinking and work from philosophers and theologians; it would be really, really good. So I hope the new pope is genuinely interested.

We'll close on a question that I recently heard Tyler Cowen ask Jack Clark of Anthropic, which I thought was so good that I decided to steal it whole cloth. In the ongoing AI revolution, what is the worst age to be?

Oh, wow.
Well, gosh, I haven't thought about that. But I think any age where you can live to see it is a good age, because I think we're going to make some great strides with things like medicine. It's going to be an incredible journey. None of us knows exactly how it's going to transpire; it's very difficult to say, but it's going to be very interesting to find out.

Try to be young if you can.

Yes, young is always better. I mean, in general, young is always better.

All right, Demis, thanks so much for coming.

Thank you very much.
[Music]