
Google DeepMind CEO Demis Hassabis on AI, Creativity, and a Golden Age of Science | All-In Summit

By All-In Podcast

Summary

Key Takeaways

- **Nobel Prize for AI: A Scientist's Surreal Moment**: Receiving the Nobel Prize was a surreal experience, culminating in signing the Nobel book alongside historical figures like Einstein and Marie Curie, a moment scientists dream of. [01:09], [01:45]
- **Google DeepMind: The AI Engine of Alphabet**: Google DeepMind, formed by merging AI efforts across Google and Alphabet, now serves as the engine room, integrating its advanced models like Gemini into nearly every Google product and surface area. [02:46], [03:18]
- **Genie 3: AI Generating Interactive Worlds**: Genie 3 is a groundbreaking world model that generates interactive 3D environments from text prompts, allowing users to explore and influence these worlds using intuitive controls, with all visuals rendered on the fly. [04:30], [05:03]
- **AGI Lacks True Creativity and Consistency**: Current AI, while capable of complex tasks, lacks true creativity and consistency; it cannot generate novel hypotheses or consistently perform at a PhD level, indicating AGI is still years away. [16:13], [19:00]
- **Democratizing Creativity with AI Tools**: AI tools like Nano Banana are democratizing creativity by allowing anyone to generate and edit images through simple instructions, bypassing the need for complex software skills and significantly boosting productivity for professionals. [21:04], [22:41]
- **AI to Usher in a Golden Age of Science**: The development of AGI within the next 10 years is expected to usher in a new golden era of science and a renaissance, bringing benefits across fields like energy and human health. [31:14], [31:24]

Topics Covered

  • Generative AI is reverse-engineering the physics of our world.
  • Robotics needs an 'Android OS' for mass proliferation.
  • AI will usher in a new golden era of science.
  • What's missing for true AGI? Creativity and consistency.
  • AI's net positive impact on energy and climate change.

Full Transcript

A genius who may hold the cards of our future. CEO of Google DeepMind, which is the engine of the company's artificial intelligence. After his Nobel Prize and a knighthood from King Charles, he became a pioneer of artificial intelligence.

We were the first ones to start doing it seriously in the modern era. AlphaGo was the big watershed moment, I think, not just for DeepMind, my company, but for AI in general. This was always my aim with AI from a kid: to use it to accelerate scientific discovery.

Ladies and gentlemen, please welcome Google DeepMind's Demis Hassabis.

Welcome.

Great to be here.

Thanks. Thanks for following Tucker, Mark Cuban, et al. First off, congrats on winning the Nobel Prize.

Thank you.

And thanks for the incredible breakthrough of AlphaFold. You may have done this before, but I know everyone here would love to hear your recounting of where you were when you won the Nobel Prize. How did you find out?

Well, it was a very surreal moment, obviously. Everything about it is surreal. They tell you about ten minutes before it all goes live, so you're sort of shell-shocked when you get that call from Sweden. It's the call that every scientist dreams about. And then there are the ceremonies, a whole week in Sweden with the royal family. It's amazing; it's been going for 120 years. The most amazing bit is they bring out the Nobel book from the vaults, from the safe, and you get to sign your name next to all the other greats. It's quite an incredible moment, leafing back through the earlier pages and seeing Feynman and Marie Curie and Einstein and Niels Bohr, and you just carry on going backwards. To put your name in that book is incredible.

Did you have an inkling you had been nominated and that this might be coming your way?

Well, you hear rumors. It's amazingly locked down, actually, in today's age, how they keep it so quiet; it's sort of a national treasure for Sweden. And so you hear, you know, maybe AlphaFold is the kind of thing that would be worthy of that recognition. They look for impact as well as the scientific breakthrough itself, impact in the real world, and that can take 20 or 30 years to arrive. So you just never know how soon it's going to be, or whether it's going to come at all. So it's a surprise.

Well, congrats.

Yeah. Thank you.

And thank you: you let me take a picture with it a few weeks ago, so that's something I'll cherish. What is DeepMind within Alphabet? Alphabet is a sprawling organization, with sprawling business units. What is DeepMind? What are you responsible for?

Well, we sort of see DeepMind now, and Google DeepMind as it's become: we merged all of the different AI efforts across Google and Alphabet, including DeepMind, a couple of years back, and put it all together, bringing the strengths of all the different groups into one division. And really, the way I describe it now is that we're the engine room of the whole of Google and the whole of Alphabet. So Gemini, the main model that we're building, but also many of the other models we build, the video models and interactive world models, we plug them in all across Google now. Pretty much every product, every surface area, has one of our AI models in it. So billions of people now interact with Gemini models, whether that's through AI Overviews, AI Mode, or the Gemini app. And that's just the beginning: we're incorporating it into Workspace, into Gmail, and so on. So it's a fantastic opportunity for us to do cutting-edge research and then immediately ship it to billions of users.

And how many people? What's the profile? Are these scientists, engineers? What's the makeup of your team?

There are around 5,000 people in Google DeepMind, and it's predominantly, I guess, 80-plus percent engineers and PhD researchers. So, yeah, about three or four thousand.

So, there's an evolution of models, a lot of new models coming out, and also new classes of models. The other day you released this Genie world model.

Yes.

So, what is the Genie world model? I think we've got a video of it. Is it worth looking at, so we can talk about it live?

Yeah, we can watch. Sure.

Because I think you have to see it to understand it; it's so extraordinary. Can we pull up the video, and then Demis can narrate a little bit about what we're looking at?

What you're seeing are not games or videos; they're worlds. Each one of these is an interactive environment generated by Genie 3, a new frontier for world models. With Genie 3, you can use natural language to generate a variety of worlds and explore them interactively, all with a single text prompt.

Yeah. So all of these videos, all these interactive worlds you're seeing: someone is actually controlling the video. It's not a static video. It's being generated from a text prompt, and then people are able to control the 3D environment using the arrow keys and the spacebar. Everything you're seeing here, all of these pixels, is being generated on the fly. They don't exist until the player, the person interacting with it, goes to that part of the world. And you'll see in a second, this is fully generated, not a real video: someone painting their room, painting some stuff on the wall. The player is going to look to the right, and then look back. This part of the world didn't exist before, so now it exists, and when they look back they see the same painting marks they left just earlier. Again, every pixel you can see is fully generated. And then you can type things like "a person in a chicken suit" or "a jet ski" and it will, in real time, include them in the scene.

So, I think it's quite mind-blowing, really.

I think what's hard to grok when looking at this, because we've all played video games that have a 3D element, where you're in an immersive world: here, no objects have been created. There's no rendering engine. You're not using Unity or Unreal, which are the 3D rendering engines.

Yeah.

This is actually just 2D images being created on the fly by the AI.

This model is reverse-engineering intuitive physics. It's watched many millions of videos, YouTube videos and other things about the world, and just from that it has reverse-engineered how a lot of the world works. It's not perfect yet, but it can generate a consistent minute or two of interaction, with you as the user, in many, many different worlds. There are some videos later on where you can control, you know, a dog on a beach, or a jellyfish; it's not limited to just human things.

Because the way a 3D rendering engine works is that the programmer programs all the laws of physics. How does light reflect off an object? You create a 3D object, light reflects off it, and what I see visually is rendered by the software, because it has all the programming for how to do physics. But this was just trained off of video, and it figured it all out.

Yeah, it was trained off of video and some synthetic data from game engines, and it has just reverse-engineered it. For me, this project is very close to my heart, but it's also quite mind-blowing, because in the 90s, in my early career, I used to write video games, AI for video games, and graphics engines. I remember how hard it was to do this by hand, programming all the polygons and the physics engines. And it's amazing to see this do it effortlessly: all the reflections on the water, the way materials flow and objects behave. It's doing all of that out of the box. I think it's hard to describe how much complexity was solved for with that model. It's really, really mind-blowing.
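To make the mechanism concrete: interactive world models of this kind are typically run as an autoregressive loop, predicting each new frame from recent frames plus the player's latest input, which is why pixels only come into existence when someone looks at that part of the world. Below is a minimal sketch of such a loop; the names, shapes, and toy network are illustrative assumptions, not Genie 3's actual architecture or API.

```python
# Minimal sketch of an action-conditioned world-model loop (illustrative only;
# not Genie 3's real architecture or API). A learned model predicts the next
# frame from the current frame plus the player's input, so the world is
# "rendered" one generated frame at a time instead of by a 3D engine.
import torch
import torch.nn as nn

NUM_ACTIONS = 5  # up, down, left, right, spacebar (assumed control set)

class TinyWorldModel(nn.Module):
    def __init__(self, frame_channels=3, hidden=64):
        super().__init__()
        self.action_embed = nn.Embedding(NUM_ACTIONS, hidden)
        self.encoder = nn.Conv2d(frame_channels, hidden, 3, padding=1)
        self.decoder = nn.Conv2d(hidden, frame_channels, 3, padding=1)

    def forward(self, frame, action):
        # Condition the frame features on the player's action, then decode the
        # next frame. A real model would use a large video transformer with a
        # long context window to keep the world consistent over time.
        h = self.encoder(frame)
        a = self.action_embed(action)[:, :, None, None]  # broadcast over pixels
        return torch.sigmoid(self.decoder(h + a))

model = TinyWorldModel()
frame = torch.rand(1, 3, 64, 64)  # stand-in for a frame seeded by a text prompt

for step in range(120):
    action = torch.randint(0, NUM_ACTIONS, (1,))  # stand-in for keyboard input
    frame = model(frame, action)  # pixels exist only once they are generated
```

The point is the loop at the end: there is no scene graph or polygon mesh anywhere, just a model repeatedly asked what comes next, given what was on screen and what the player pressed.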

Where does this lead us? Fast-forward this model to generation five.

Yeah. So the reason we're building these kinds of models: we've always felt, even while progressing on the normal language models like our Gemini model, that from the beginning we wanted Gemini to be multimodal. We wanted it to take any kind of input, images, audio, video, and to be able to output anything. And we've been very interested in this because, for an AI to be truly general, to build AGI, we feel the AGI system needs to understand the world around us, the physical world, not just the abstract worlds of language or mathematics. Of course, that's what's critical for robotics to work; it's probably what's missing from it today. And also things like smart glasses: a smart-glasses assistant that helps you in your everyday life has to understand the physical context you're in and how the intuitive physics of the world works. So we think that building these types of models, these Genie models and also Veo, our best text-to-video models, are expressions of us building world models that understand the dynamics of the world, the physics of the world. If you can generate it, that's an expression of your system understanding those dynamics.

And that leads to a world of robotics, ultimately, as one aspect, one application. But maybe we can talk about that: what is the state of the art with vision-language-action models today? A generalized system, a box, a machine that can observe the world with a camera; then I can use language, text or speech, to tell it what I want it to do, and it knows how to act physically, to do something in the physical world for me.

That's right. If you look at Gemini Live, the version of Gemini where you can hold up your phone to the world around you, and I'd recommend any of you try it, it's kind of magical what it already understands about the physical world. You can think of the next step as incorporating that into some more handy device like glasses, and then it will be an everyday assistant: it'll be able to recommend things to you as you're walking the streets, or we can embed it into Google Maps. And then with robotics, we've built something called Gemini Robotics models, which are Gemini fine-tuned with extra robotics data. What's really cool about that, and we released some demos of this over the summer, is that we've got these tabletop setups with two robotic hands interacting with objects on a table, and you can just talk to the robot. You can say, "put the yellow object into the red bucket," or whatever it is, and it will interpret that language instruction into motor movements. That's the power of a multimodal model rather than a robotics-specific model: it can bring real-world understanding to the way you interact with it. So in the end it will be both the UI/UX that you need and the understanding the robots need to navigate the world safely.
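To make the pipeline he describes concrete, here is a hedged sketch of a vision-language-action control loop: one multimodal model takes the camera image and the language instruction together and emits an action. Every name here (the policy class, the 7-DoF command layout) is a hypothetical stand-in, not the Gemini Robotics interface.

```python
# Hedged sketch of a vision-language-action (VLA) control loop (hypothetical
# names; not the Gemini Robotics API). One multimodal model maps a camera
# image plus a language instruction to an action, which is then sent to the
# robot as a motor command.
import numpy as np

class TinyVLAPolicy:
    """Stand-in for a multimodal model fine-tuned on robotics data."""

    def predict_action(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real VLA model tokenizes the image and instruction together and
        # autoregressively emits action tokens; here we return a random
        # 7-DoF command (x, y, z, roll, pitch, yaw, gripper) as a placeholder.
        return np.random.uniform(-1.0, 1.0, size=7)

def control_loop(policy, camera, robot, instruction: str, steps: int = 100):
    """Closed loop: observe, condition on language, act, repeat."""
    for _ in range(steps):
        image = camera.read()                       # current observation
        action = policy.predict_action(image, instruction)
        robot.apply(action)                         # e.g. end-effector deltas

# Usage would look like:
# control_loop(TinyVLAPolicy(), camera, robot,
#              "put the yellow object into the red bucket")
```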

I asked Sundar this: does that mean that ultimately you could build the equivalent of, call it, a Unix-like operating-system layer, or an Android, for generalized robotics? At which point, if it works well enough across enough devices, there will be a proliferation of robotics devices and companies and products that suddenly take off in the world, because the software exists to do this generally.

Exactly. That's certainly one strategy we're pursuing: a kind of Android play, if you like, almost an OS layer across robotics. But there are also some quite interesting things about vertically integrating our latest models with specific robot types and robot designs, and some kind of end-to-end learning of that too. So both are actually pretty interesting, and we're pursuing both strategies.

Do you think humanoid robots are a good form factor? Does that make sense in the world? Some folks have criticized it: the humanoid form suits humans because we're meant to do lots of different things, but if we want to solve one problem, there may be a different form factor to fold laundry, or do dishes, or clean the house, or whatever.

Yeah, I think there's going to be a place for both. Actually, I used to be of the opinion, maybe five or ten years ago, that we'd have form-specific robots for certain tasks, and I think industrial robots will definitely be like that, where you can optimize the robot for the specific task: whether it's a laboratory or a production line, you'd want quite different types of robots. On the other hand, for general or personal-use robotics, just interacting with the ordinary world, the humanoid form factor could be pretty important, because of course we've designed the physical world around us for humans. Steps, doorways, all the things we've designed for ourselves: rather than changing all of those in the real world, it might be easier to design the form factor to work seamlessly with the way we've already designed the world. So I think there's an argument to be made that the humanoid form factor could be very important for those types of tasks, but there is also a place for specialized robotic forms.

Do you have a view on the numbers: hundreds of millions, millions, thousands, over the next five years, seven years? In your head, do you have a vision?

Yeah, I do, and I spend quite a lot of time on this. I feel we're still a little bit early on robotics. I think in the next couple of years there'll be a real wow moment with robotics, but the algorithms need a bit more development. The general-purpose models that these robotics models are built on still need to be better, more reliable, and better at understanding the world around them, and I think that will come in the next couple of years. Then there's the hardware side. Eventually, I think we will have millions of robots helping society and increasing productivity, but the key, when you talk to hardware experts, is at what point you have the right level of hardware to go for the scaling option. Effectively, once you start building factories around making tens of thousands or hundreds of thousands of a particular robot type, it's harder for you to update and quickly iterate the robot design. So it's one of those questions where, if you call it too early, the next generation of robot might be invented in six months' time that's just more reliable, better, and more dextrous.

Sounds like, to use a computing analogy, we're kind of in the 70s-era PC, DOS phase.

Yeah, potentially. Maybe that's where we are, except that ten years now happens in about one year, probably.

Right, one of those years, right?

Exactly.

Yeah. So let's talk about other applications, particularly in science, true to your heart as a scientist, as the Nobel Prize-winning scientist. I always felt the greatest things we would be able to do with AI would be the problems that are intractable to humans with our current technology and capabilities and our brains, and that we could unlock all of this potential. What are the areas of science, and the breakthroughs in science, that you're most excited about, and what kinds of models do we use to get there?

Yeah, I mean, AI to accelerate scientific discovery, and to help with things like human health, is the reason I've spent my whole career on AI. I think it's the most important thing we can do with AI, and I feel that if we build AGI in the right way, it will be the ultimate tool for science. I think we've been showing a lot of the way at DeepMind, AlphaFold most famously, but we've actually applied our AI systems to many branches of science, whether it's materials design, helping with controlling plasma in fusion reactors, predicting the weather, or solving math olympiad problems. The same types of systems, with some extra fine-tuning, can basically solve a lot of these complex problems. So I think we're just scratching the surface of what AI will be able to do, and there are some things that are missing. AI today, I would say, doesn't have true creativity, in the sense that it can't come up with a new conjecture or new hypothesis yet. It can maybe prove something you give it, but it's not able to come up with a new idea or new theory itself. So I think that would be one of the tests, actually, for AGI.

What is that creativity, as a human?

Yeah.

What is creativity?

I think it's the sort of intuitive leaps that we often celebrate in the best scientists in history, and artists of course. Maybe it's done through analogy, analogical reasoning; there are many theories in psychology and neuroscience as to how we as human scientists do it. But a good test for it would be something like this: give one of these modern AI systems a knowledge cutoff of 1901 and see if it can come up with special relativity, like Einstein did in 1905. If it's able to do that, then I think we're on to something really important, where perhaps we're nearing an AGI. Another example would be our AlphaGo program that beat the world champion at Go. Not only did it win, back 10 years ago; it invented new strategies that had never been seen before for the game of Go, famously move 37 in game two, which is now studied. But can an AI system come up with a game as elegant, as satisfying, as aesthetically beautiful as Go, not just a new strategy? The answer to those things at the moment is no. So that's one of the things I think is missing from a true general system, an AGI system: it should be able to do those kinds of things as well.

Can you break down what's missing? And maybe, related to the point of view shared by Dario, Sam, and others that AGI is a few years away: do you not subscribe to that belief? Help us understand, in your understanding of the structure, of the system architecture, what's lacking.

Well, I think the fundamental aspect of this is: can we mimic the intuitive leaps, rather than the incremental advances, that the best human scientists seem to be able to make? I always say that what separates a great scientist from a good scientist, both being technically very capable of course, is that the great scientist is more creative: maybe they'll spot some pattern in another subject area that has an analogy, some sort of pattern match, to the area they're trying to solve. I think one day AI will be able to do this, but it doesn't yet have the reasoning capabilities, and some of the thinking capabilities, that are going to be needed to make that kind of breakthrough. I also think we're lacking consistency. You often hear some of our competitors say that these modern systems we have today are PhD intelligences. I think that's nonsense. They're not PhD intelligences. They have some capabilities that are PhD level, but they're not in general capable, and that's exactly what general intelligence should be: performing across the board at the PhD level. In fact, as we all know from interacting with today's chatbots, if you pose the question in a certain way, they can make simple mistakes with even high-school maths and simple counting. That shouldn't be possible for a true AGI system. So I think we are maybe, I would say, five to ten years away from having an AGI system that's capable of doing those things. Another thing that's missing is continual learning: the ability to teach the system something new online, or adjust its behavior in some way. So a lot of these core capabilities are still missing. Maybe scaling will get us there, but if I were to bet, I'd say there are probably one or two missing breakthroughs still required, and they will come over the next five or so years.

In the meantime, some of the reports and the scoring systems that are used seem to be demonstrating two things. One, and tell me if we're wrong on this, a convergence in the performance of large language models; and two, perhaps, a slowing down or flatlining of improvement in performance with each generation. Are those two statements generally true, or not so much?

No, I mean, we're not seeing that internally; we're still seeing a huge rate of progress. But we're also looking at things more broadly: you see it with our Genie models and Veo models, and Nano Banana is insane.

It's bananas.

Can I see who's used it? Has anyone used Nano Banana? It's incredible, right? I mean, I'm a nerd who used to use Adobe Photoshop as a kid, and Kai's Power Tools, and, as I was telling you, Bryce 3D. So, coming from those graphics systems, recognizing what's going on there was just mind-blowing.

Well, I think that's the future of a lot of these creative tools: you're just going to vibe with them, just talk to them, and they'll be consistent enough. With Nano Banana, it's an image generator, state-of-the-art and best-in-class, but one of the things that makes it so great is its consistency. It's able to instruction-follow: to change what you want changed and keep everything else the same. So you can iterate with it and eventually get the kind of output that you want. I think that's what the future of a lot of these creative tools is going to be, and it signals the direction. People love it, and they love creating with it.
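As a concrete illustration of why that consistency matters: iterative editing only works if each instruction changes what was asked for and leaves the rest of the image alone. A sketch of that workflow, using an entirely hypothetical client (not an actual Nano Banana or Google API):

```python
# Hedged sketch of iterative, instruction-driven image editing (hypothetical
# client and method names; not an actual Nano Banana / Google API). The key
# property is that each call changes only what the instruction asks for and
# preserves the rest of the image, so edits can be stacked.
from dataclasses import dataclass

@dataclass
class Image:
    pixels: bytes  # stand-in for real image data

class HypotheticalImageEditor:
    def edit(self, image: Image, instruction: str) -> Image:
        # A real editor conditions a generative model on both the source
        # image and the instruction; consistency means untouched regions
        # survive round trips. Here we just pass the image through.
        return image

editor = HypotheticalImageEditor()
image = Image(pixels=b"...")  # starting photo

# Because each edit is localized, the user can refine step by step instead of
# regenerating the whole image from scratch.
for instruction in [
    "remove the lamp post behind the subject",
    "make the sky look like late evening",
    "keep everything else the same, but add a red scarf",
]:
    image = editor.edit(image, instruction)
```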

So the democratization of creativity, I think, is really powerful. I remember having to buy books on Adobe Photoshop as a kid, and you'd read them to learn how to remove something from an image, and how to fill it in, and feather, and all this stuff. Now anyone can do it with Nano Banana: they can just explain to the software what they want it to do, and it does it.

Yeah. I think you're going to see two things. One is this democratization of these tools, for everybody to just use and create with, without having to learn incredibly complex UXs and UIs like we had to in the past. But on the other hand, we're also collaborating with filmmakers and top creators and artists, so they're helping us design what these new tools should be and what features they would want. People like the director Darren Aronofsky, who's a good friend of mine and an amazing director: he and his team have been making films using Veo and some of our other tools, and we're learning a lot by observing them and collaborating with them. What we find is that it also superpowers and turbocharges the best professionals, because the best creatives, the professional creatives, are suddenly able to be 10x, 100x more productive. They can try out all sorts of ideas they have in mind at very low cost and then get to the beautiful thing they wanted. So I actually think both things are true. We're democratizing it for everyday use, for YouTube creators and so on. But at the high end, for the people who understand these tools, and not everyone can get the same output out of these tools, there's a skill in that, as well as the vision and the storytelling and the narrative style of the top creatives, it allows them to iterate way faster, and they really enjoy using them.

Do we get to a world where each individual describes what sort of content they're interested in? "Play me music like Dave Matthews," and it'll play some new track.

Yes.

Or, "I want to play a video game set in the movie Braveheart, and I want to be in that movie."

Yes.

And I just have that experience. Do we end up there, or do we still have a one-to-many creative process in society? How important is this culturally? I know this is a little bit philosophical, but it's interesting to me: are we still going to have storytelling where we have one story that we all share because someone made it, or are we each going to start to develop and pull on our own kind of virtual...

I actually foresee a world, and I think a lot about this, having started in the games industry in the '90s as a game designer and programmer: I think what we're seeing is the beginning of the future of entertainment, maybe some new genre or new art form, where there's a bit of co-creation. I still think you'll have the top creative visionaries. They will be creating these compelling experiences and dynamic storylines, and they'll be of higher quality, even using the same tools, than what the everyday person can do. Millions of people will potentially dive into those worlds, but maybe they'll also be able to co-create certain parts of those worlds, with the main creative person acting almost as an editor of that world. That's the kind of thing I'm foreseeing in the next few years, and something I'd actually like to explore ourselves with technologies like Genie.

Right. Incredible. And how are you spending your time? Maybe you can describe Isomorphic?

Of course.

What Isomorphic is, and are you spending a lot of your time there?

I am. So I also run Isomorphic, which is our spinout company to revolutionize drug discovery, building on our AlphaFold breakthrough in protein folding. Of course, knowing the structure of a protein is only one step in the drug discovery process. So you can think of Isomorphic as building many adjacent AlphaFolds to help with things like designing chemical compounds that don't have side effects but bind to the right place on the protein. And I think we could reduce drug discovery from taking years, sometimes a decade, down to maybe weeks or even days over the next 10 years.

It's incredible. Do you think that's in the clinic soon, or is that still in the discovery phase?

We're building up the platform right now, and we have great partnerships with Eli Lilly, I think you had the CEO speaking earlier, and Novartis, which are fantastic, plus our own internal drug programs. I think we'll be entering a sort of preclinical phase sometime next year.

So candidates get handed over to the pharma company, and they then take them forward?

That's right. And we're working on cancers, immunology, and oncology, and we're working with places like MD Anderson.

How much of this requires, and I just want to go back to your point about AGI as it relates to what you just said: models can be probabilistic or deterministic, and tell me if I'm reducing this down too simplistically. A deterministic model takes an input and outputs something very specific: it's got a logical algorithm and it outputs the same thing every time. A probabilistic model can change things and make selections: with 80% probability I'll select this letter, with 90% I'll select this letter next, and so on. How much do we have to develop deterministic models that sync up with, for example, the physics or the chemistry underlying the molecular interactions, as you do your drug-discovery modeling? How much are you building novel deterministic models that work with the probabilistic models trained on data?
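The distinction the question draws can be pinned down in a few lines: a deterministic decoder maps the same input to the same output every time, while a probabilistic one samples from a distribution over outputs, so repeated runs can differ. A minimal illustration, with made-up numbers:

```python
# Minimal sketch of the distinction: greedy (deterministic) decoding always
# picks the most likely next token, while probabilistic sampling draws from
# the model's distribution, so repeated runs can differ.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["a", "b", "c"]
probs = np.array([0.8, 0.15, 0.05])  # model's next-token distribution

greedy_pick = vocab[int(np.argmax(probs))]  # always "a"
sampled_pick = rng.choice(vocab, p=probs)   # "a" about 80% of the time

print(greedy_pick, sampled_pick)
```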

Yeah, it's a great question. Actually, for the moment, and I think probably for the next five years or so, we're building what you might call hybrid models. AlphaFold itself is a hybrid model. You have the learning component, this probabilistic component you're talking about, which is based on neural networks and transformers and so on, and that learns from whatever data you have available. But in a lot of cases with biology and chemistry, there isn't enough data to learn from, so you also have to build in some of the rules about chemistry and physics that you already know: for example, with AlphaFold, the angles of bonds between atoms, and making sure AlphaFold understood that you couldn't have atoms overlapping with each other, things like that. Now, in theory it could learn all of that, but it would waste a lot of the learning capacity, so it's actually better to have that as a constraint in there. AlphaGo was another hybrid system: a neural network learning about the game of Go and which kinds of patterns are good, with Monte Carlo tree search on top doing the planning. The trick with all hybrid systems is how you marry up a learning system with a more handcrafted, bespoke system and actually have them work well together. And that's pretty tricky to do.
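One standard way to realize the hybrid idea described here is to keep the probabilistic, learned component but encode known physics as penalty terms in the loss, so the network never spends capacity rediscovering rules like "atoms cannot overlap." A minimal sketch, with made-up constants and shapes rather than anything from AlphaFold's actual implementation:

```python
# Sketch of a hybrid loss: a learned model fits the data, while hard-won
# domain knowledge (here, "atoms must not overlap") enters as a penalty term.
# Constants and shapes are illustrative, not AlphaFold's actual values.
import torch

MIN_ATOM_DISTANCE = 1.2  # assumed clash threshold, in angstroms

def clash_penalty(coords: torch.Tensor) -> torch.Tensor:
    """Penalize pairs of predicted atoms closer than the physical minimum."""
    dists = torch.cdist(coords, coords)                    # (N, N) pairwise
    offdiag = ~torch.eye(len(coords), dtype=torch.bool)    # ignore self-pairs
    violation = (MIN_ATOM_DISTANCE - dists).clamp(min=0.0) # only too-close pairs
    return violation[offdiag].pow(2).mean()

def hybrid_loss(pred_coords, true_coords, physics_weight=0.1):
    data_term = (pred_coords - true_coords).pow(2).mean()  # learned from data
    physics_term = clash_penalty(pred_coords)              # known constraint
    return data_term + physics_weight * physics_term

# e.g. pred_coords = model(sequence); both tensors shaped (num_atoms, 3)
pred = torch.rand(8, 3, requires_grad=True)
true = torch.rand(8, 3)
loss = hybrid_loss(pred, true)
loss.backward()  # gradients flow through both the data and physics terms
```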

Does that sort of architecture ultimately lead to the breakthroughs needed for AGI, do you think? Are there deterministic components that need to be solved?

I think ultimately what you want to do, when you figure out something with one of these hybrid systems, is upstream it into the learning component. It's always better if you can do end-to-end learning and directly predict the thing you're after from the data you're given. So once you've figured something out using one of these hybrid systems, you then try to go back, reverse-engineer what you've done, and see if you can incorporate that information into the learning system. This is sort of what we did with AlphaZero, the more general form of AlphaGo. AlphaGo had some Go-specific knowledge in it, but with AlphaZero we got rid of that, including the human data, the human games we learned from, and just did self-learning from scratch. And of course it was then able to learn any game, not just Go.
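The AlphaZero recipe he sketches, dropping human games and domain priors in favor of pure self-play, boils down to a loop like the one below. This is a bare-bones illustration with a stubbed game and no search, not DeepMind's implementation:

```python
# Bare-bones self-play loop in the AlphaZero spirit (illustrative stub, not
# DeepMind's implementation): the system generates its own training data by
# playing against itself, so no human games or game-specific knowledge are
# needed, and the same loop works for any game exposing this interface.
import random

class StubGame:
    """Minimal game interface; swap in Go, chess, etc."""
    def __init__(self):
        self.turns = 0
    def legal_moves(self):
        return [0, 1, 2]
    def play(self, move):
        self.turns += 1
    def is_over(self):
        return self.turns >= 10
    def outcome(self):
        return random.choice([-1, 1])  # win/loss from player one's view

def self_play_episode(policy):
    """Play one game against itself, recording (state, move) pairs."""
    game, trajectory = StubGame(), []
    while not game.is_over():
        move = policy(game.legal_moves())  # real systems add MCTS here
        trajectory.append((game.turns, move))
        game.play(move)
    return trajectory, game.outcome()

def train(num_games=100):
    policy = random.choice  # stand-in for a neural network policy
    replay = []
    for _ in range(num_games):
        trajectory, result = self_play_episode(policy)
        # Each position is labeled with the game's final result; a real
        # system would fit the policy/value network to these targets.
        replay.extend((state, move, result) for state, move in trajectory)
    return replay

data = train()
```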

A lot of hype and hoopla has been made about the demand for energy arising from AI. This was a big part of the AI summit we held in Washington, D.C., a few weeks ago, and it seems to be the number-one topic everyone in tech talks about nowadays: where's all this power going to come from? But I'll ask the question of you: are there changes in the architecture of the models, or the hardware, or the relationship between the models and the hardware, that bring down the energy per token of output, or the cost per token of output, such that it ultimately mutes the energy demand curve that's in front of us? Or do you not think that's the case, and we're still going to have a pretty geometric energy demand curve?

Well, look, interestingly, again, I think both cases are true. Especially at Google and at DeepMind, we focus a lot on very efficient models that are still powerful, because we have our own internal use cases: we need to serve, say, AI Overviews to billions of users every day, and it has to be extremely efficient, extremely low latency, and very cheap to serve. So we've pioneered many techniques that allow us to do that, like distillation, where you have a bigger model internally that trains the smaller model: you train the smaller model to mimic the bigger model.
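Distillation, as described here, is usually implemented by training the small model to match the large model's full output distribution rather than just hard labels. A minimal sketch using the standard softened-softmax formulation; the shapes and hyperparameters are illustrative, not Google's internal recipe:

```python
# Minimal knowledge-distillation sketch (standard technique, illustrative
# shapes; not Google's internal training recipe). The small "student" model
# is trained to mimic the big "teacher" model's output distribution.
import torch
import torch.nn.functional as F

vocab, hidden, temperature = 100, 32, 2.0
teacher = torch.nn.Sequential(torch.nn.Linear(hidden, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, vocab))  # big, expensive
student = torch.nn.Linear(hidden, vocab)                    # small, cheap to serve

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(100):
    x = torch.randn(16, hidden)  # stand-in for a batch of inputs
    with torch.no_grad():        # the teacher only provides targets
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_logp = F.log_softmax(student(x) / temperature, dim=-1)
    # KL divergence pushes the student's distribution toward the teacher's.
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```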

And over time, if you look at the progress of the last two years, the model efficiencies are 10x, even 100x, better for the same performance. Now, the reason that isn't reducing demand is that we still haven't got to AGI yet. So on the frontier models, you keep wanting to train and experiment with new ideas at larger and larger scale, whilst at the same time, on the serving side, things are getting more and more efficient. So both things are true. And in the end, from the energy perspective, I think AI systems will give back a lot more to energy and climate change and these kinds of things than they take: efficiency of grid systems and electrical systems, materials design, new types of properties, new energy sources. I think AI will help with all of that over the next 10 years, and that will far outweigh the energy it uses today.

As the last question: describe the world 10 years from now.

Wow. Okay. Well, in AI, even 10 weeks is a lifetime, let alone 10 years. But I do feel we will have AGI in the next 10 years, full AGI, and I think that will usher in a new golden era of science, a kind of new renaissance. And I think we'll see the benefits of that right across the board, from energy to human health.

Amazing. Please join me in thanking Nobel laureate Demis Hassabis. Thank you. That was great. Thank you.
