
Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)

By The MAD Podcast with Matt Turck

Summary

## Key takeaways

- **AI's exponential progress is underestimated**: AI progress is doubling task length every 3-4 months, a rate difficult for humans to intuitively grasp, leading to underestimations of its rapid advancement. [00:06], [01:58]
- **2026-2027: Autonomous agents and expert-level breadth**: By mid-2026, AI agents are predicted to work autonomously for a full day, and by 2027, they will frequently outperform human experts across many tasks. [04:46], [06:49]
- **Move 37: AI's potential for novel insights**: The 'Move 37' moment in AlphaGo demonstrated AI's capacity for truly novel and creative insights, a capability that extends to modern LLMs, though usefulness and interest are key challenges. [10:50], [13:51]
- **Pre-training + RL is the path to productivity**: The current paradigm of pre-training followed by reinforcement learning is likely to achieve significant productivity gains and accelerate scientific progress, rather than requiring entirely new architectures. [19:08], [20:52]
- **RL unlocks robust agents beyond pre-training**: Reinforcement learning is crucial for developing robust AI agents because pre-training data lacks examples of failure and interaction, whereas RL allows agents to learn from their own behavior and correct errors. [48:02], [50:15]
- **Goodhart's Law applies to AI benchmarks**: AI benchmarks, when treated as targets, cease to be good measures of true performance, necessitating internal, task-specific evaluations to avoid 'leaderboard theater'. [54:26], [56:08]

Topics Covered

  • Public Fails to Grasp AI's Exponential Progress
  • Could AI Win a Nobel Prize by 2027?
  • MuZero: Learning to Act Without Perfect World Simulation
  • Goodhart's Law: Why AI Benchmarks Can Be Misleading
  • AI: A Complementary Tool, Not a One-for-One Job Replacement

Full Transcript

The talk about AI bubbles seemed very

divorced from what was happening in

frontier labs and what we were seeing.

We are not seeing any slowdown of

progress. We are seeing this very

consistent improvement over many many

years where every 3-4 months the model is able to do a task that is twice as long as before, completely on

its own. It's very hard for us to

intuitively understand these exponential

trends. If you manage to make everybody

in society 10 times more productive you

know what kind of abundance can we

achieve? What will we be able to unlock

in the next 5 years? I think we can go

extremely far.

>> Welcome to the MAD Podcast. I'm Matt Turck from FirstMark. Today my guest is Julian Schrittwieser, one of the world's

most impressive AI researchers. Julian

was a core contributor to DeepMind's

legendary AlphaGo Zero and MuZero projects

and he is now a key researcher at

Anthropic. We covered the exponential

trajectory of AI and his predictions for

2026 and 2027, the frontier in

reinforcement learning and AI agents and

the science behind AI creativity and the

famous Move 37 from AlphaGo. Please

enjoy this fantastic conversation with

Julian.

>> Hey Julian, welcome.

>> Hey Matt, thanks for having me. A couple

of weeks ago, you wrote uh an incredible

blog post uh that broke the internet

entitled "Failing to understand the exponential, again." What is it that so

many people are missing about the

current trajectory of AI?

>> Yeah, it's funny that you bring up that

blog post. I really didn't expect it to

blow up that much. I actually had the

idea when I was on holiday in Kyrgyzstan

a few weeks ago on a very long car ride

and then I started thinking about this

and all the talk about AI bubbles I'd seen on X, and, you know, this discussion, and it seemed very divorced from what was happening in frontier labs and what we

were seeing and that made me start to

wonder a bit like is it that things are

moving so fast that people maybe

struggle a bit to extrapolate and

understand intuitively.

Oh, you know, maybe it's far away now,

but you know, it's doubling every so

many months, which means that once it

gets close to us, it's going to move

past and become really good very

quickly. And that reminded me a lot, in like a different way, of what happened during early COVID, where we had a similar situation where,

you know, at the beginning it's like

very few cases. It's like, wow, you

know, it's never going to happen. It's

only a few hundred people. Who cares?

But if you understand the math and if

you look at it, right, it's like, oh,

it's going to double every, you know,

week, two weeks. Clearly, it's going to be at a massive scale. But it's very

hard for us to intuitively understand

these exponential trends because it's

just not what we're used to in our

normal environment. And so that's what

got me thinking oh is something similar

happening here with AI right

>> If you look at the many benchmarks and evaluations we have, we are clearly seeing this very consistent improvement over many, many years, where every 3-4 months the model is able to do a task that is twice as long as before, completely on its own. And so we can extrapolate this, and we see that in a year from now, maybe two years from now, the top models are going to be able to work completely on their own for a whole day or more. Combined with this,

combined with the fact that there's a

huge number of knowledge based jobs in

the economy, knowledge-based tasks, and combined with the fact that in the frontier labs we are not seeing any slowdown of progress.

Just extrapolating those things together

over a very short time like you know

half a year, one year already that is

enough to know that there is going to be

massive economic impact. That means if

you look at current numbers, if you look at OpenAI, if you look at Anthropic, if you look at Google, those valuations, those revenue numbers are actually

fairly conservative. Something I've been thinking about more recently is that it's maybe actually even more interesting and more complex, in that, you know, while those

frontier labs and frontier models are

clearly very capable and on like an

extreme trajectory,

there are like a lot of other

companies, right, that are trying to

follow into the same AI sphere that may

also have very high valuations but not

necessarily the revenues to support it.

And so it's possible that there may

simultaneously be like some sort of

bubble in you know the wider ecosystem

while at the same time the frontier labs are on a very solid trajectory, having a lot of revenue, making a lot of money. I think that may be quite an unusual situation,

in that in the past, you know, maybe in the dot-com bubble, maybe when people were talking about the railroad rush and stuff like this, we did not see this bifurcation. So yeah, I've been thinking about this more and I think the situation is getting more and more interesting.

>> Fascinating. So you alluded to some of

your predictions or extrapolations for 2026 and 2027. Do you want to unpack that? You had three of those.

>> Maybe you know calling it my prediction

is giving myself too much credit, right?

I will just say this. If you look for

example at the METR eval and you very

naively

extrapolate the linear fit, that's what

you would expect to happen. And so I'm

just going to be humble, right, and say

like, "Oh, most of the time, right, I'm

not going to be smarter than statistical

models, statistical extrapolation of

past trends that have been very

consistent. So I'm just going to be very

humble and like despite, you know, what

I might know all about research and

what's happening." Probably like, you

know, the most likely the best

prediction I can make is actually just

follow that data, that extrapolation and

see where it's going to take us. And

yeah, in that case, if you roll this out, if you look at other

benchmarks, I think we would have

something like next year, maybe the

models will be able to work on their own

for a whole day's worth of tasks. If you

think of software, right, you might say

like, oh, implement this entire feature,

build out this entire set of the app. If

you think of knowledge work, right, maybe do a whole research report,

this kind of scale. The reason I think

why task length specifically is

interesting is because that's

what allows you to delegate more and

more work to language models to agents.

Even if you have a very clever model,

but if it needs feedback or the

interaction with you very often, then it

really limits what you can delegate to

it. If you need to talk to it every 10

minutes, right? Versus if you have

something that can go for hours at a

time, obviously, right? Then you can have not just one copy of it, you can have a

whole team that you delegate tasks to

and manage them. And so I think that's

why it's really critical that the models

are actually smart enough, the agents

are smart enough to work on their own to

correct their own errors to you know

iterate, because that's really what allows you to delegate.
>> Indeed. Uh, task length and time to complete as the metric for progress. So by mid-2026 you mentioned agents can work all day autonomously, late 2026 at least one

model matches industry experts across

many occupations and then by 2027 models

frequently outperform experts on many

tasks. So it's more time running and then generalization across the economy, and you mentioned GDPval, the OpenAI metric, as a benchmark to already see the

progress towards multiple professions.
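As a back-of-the-envelope illustration of the naive extrapolation being discussed, the sketch below simply compounds a doubling task horizon. The one-hour starting point and the 3.5-month doubling period are assumed, illustrative numbers, not figures quoted in the conversation.

```python
# Hedged back-of-the-envelope sketch of the doubling extrapolation discussed above.
# The 1-hour starting horizon and 3.5-month doubling period are assumed values,
# roughly in the spirit of the METR-style trend, not official figures.

task_hours = 1.0          # assumed: a model can work ~1 hour autonomously today
doubling_months = 3.5     # assumed: autonomous task length doubles every ~3-4 months

months = 0.0
while task_hours < 8.0:   # a full 8-hour working day
    task_hours *= 2
    months += doubling_months
    print(f"after ~{months:.1f} months: ~{task_hours:.0f}-hour task horizon")

# With these assumptions the 8-hour mark arrives in roughly a year, which is the
# kind of "works all day by mid-2026" extrapolation described above.
```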

>> Yeah, I think GDPval is like a super cool evaluation from OpenAI where they

collected a lot of like you know real

world tasks from real domain experts to

make sure it is actually representative

of what you might do in the economy and

then they evaluated a lot of models on

those tasks. They compared them against

real experts performance to give us like

a really good indication of you know how

close how far are we from having

significant economic impact. So I think

that's like a super cool evaluation.
>> The sort of obvious question is that GDPval and METR are carefully designed benchmarks. How do they

predict production value once you add

compliance, liability, messy data, the

messy world, tool friction and all the things?

>> So I think like messiness and task

length like you know time duration that

you're able to work independently are

very similar or very correlated.

So I think that's why it's interesting

that METR tries to measure how long can

the model go on its own because if you

know if you think about like you know

how do you come up with a task that you

know takes a human 8 hours 16 hours

right

you will have to include all this messiness and all this real-world mess

to even be able to measure it but I

think you know ultimately to go further

we really want benchmarks we really want

evaluations that come from the actual

users whether it's the industry whether

it's private users

because that's that's what ultimately

matters, right? Like is the model

helpful to you? Do you get something out of it? Does it do your paperwork, help you write something, fix your code, help you study, right? I think that's the

real proof. If you release a new model,

right, do people start using it more? Do

they really enjoy it?

>> Is there anything that would change your

mind? any kind of signal whether that's

real world adoption or benchmark

performance something that would make

you more um cautious about um that

exponential is there anything that would

change your mind

>> I mean many things yes I think like you

know many of these things are like you

know internal only right I might look at

our model pre-training I might look at

our fine tuning I might look at RL

things you know how do new runs go

compared to past runs, do they match our expectations, does the scaling continue. Then

you know I might look at more public

things of like are people actually able

to use those models to be more

productive for example at the beginning

right there's always some adaptation

period of oh you have like a new tool

like Claude Code, it takes you some time to

figure out how to use it but then in the

medium term in the long term do people

keep using it are they getting more and

more productive using it I think that's

like you know one of the things I look

at many many signals I think when you do

RL when you do research

I think you get very much in the habit

of like looking for signals to prove

yourself wrong because you know you

often have ideas that you get attached

to, but that's not a good way to do

research, right? Most of your ideas

are not good and they're not going to

work.

>> So you really want to figure out as

quickly as possible whether this idea is

any good or whether it's actually wrong.

So you really get into this habit of finding the fastest thing that will show that, oh no, this is actually not true.
>> So by 2026, 2027, in your

extrapolation framework AI becomes as

good as humans. A key question of the

moment is, um, to which extent can it become better than humans?

There's um uh some chatter these days

around uh Move 37 and uh whether AI can create those alien new paths to think and solve hard problems. So first

of all maybe remind the audience what

move 37 is and then do you think that AI

in its current state uh is going to be increasingly able to provide Move 37-type thinking?

>> Yes. So I guess yeah to give background

Move 37, that was when we were building AlphaGo, an AI program to play the game of Go. That was in the year 2016 I think, and we were playing one of the best players in the world at the time, because

at that time you know no AI program no

computer program had ever beaten the top

human players at go and it was

considered to be you know one of the

most difficult board games sort of like

you know a real test of intelligence.

Move 37 happened during the second game of the five-game match, where AlphaGo played

like a really unexpected unconventional

move that surprised many professional Go

players. I think you know the

commentator said it was you know truly

creative unexpected and then ultimately

AlphaGo ended up winning that game. And so

I think that was for many people an

early sign that

AI is not just you know purely

calculating following an optimal path

but it can also do something that is

truly novel and creative that you might

not expect, you know, just from imitating its training data. Yeah, I think

that's very relevant in a modern context

as well, right? Because as you alluded

to, there is a lot of discussion of, oh,

are LLMs just parroting the training data?

Can they actually do novel things? For

me, as somebody who has been doing

research a long time, I think it's

pretty clear that these models can do

novel things. And that's why they are so

useful to many people whether it's you

know writing code for you because

obviously right you're not just writing

code that you already have that wouldn't

be very interesting or like helping you

you know write a paper or the way those

models are trained they're literally

trained to generate a whole probability

distribution which means that when we

sample from them we can generate you

know, an infinite amount of novel sequences from them. For the question of something like Move 37,

I think there it really comes down to

like you know is it something that is

sufficiently creative and impressive

that we can easily recognize it in the

game of go right that was pretty ideal

conditions because it's you know very

clean, very abstract, and each move is very impactful, so you can

really see it clearly I think to have

the equivalent for our modern models you

need the combination of a task that is

like sufficiently difficult and

interesting and a model that is both

able to create sufficiently diverse and

creative ideas and also able to evaluate

accurately how good they are so that it

can go down you know increasingly novel

path while making sure that this novel

path is actually you know interesting

and useful. Creating novel things is

actually very easy with language models.

The hard part is creating novel things

that are useful and interesting.

>> Extrapolating this further, there's the

idea of creating novel science. So not

just one move but like a whole new idea, a new concept. What's your current take on this? So I think AlphaCode and AlphaTensor proved that you can discover

novel programs and algorithms. Uh very

recently I think last week there was

some news of uh Google DeepMind and Yale uh in the biomedical field coming up with brand new things as well. So

do you think that's uh accelerating and

that AI is in the process of discovering

novel science?

>> So I think we're absolutely at the stage

where you know it is discovering novel

things and we're just moving up the

scale of how impressive how interesting

are the things that it is able to

discover on its own. And so I think it's

highly likely that sometime next year

we're going to have some discoveries

that people pretty much, you know, unanimously

agree that you know this is super

impressive. I think at the moment we're

more at the stage of you know oh it came

up with something but there's debate about it. But yeah, I'm not very

worried because I see this process

continuing and then once it gets clear

enough

there's less need to argue about it.
>> How far do you think we are from an AI

winning the Nobel Prize?

>> Yeah, I think that's a really

interesting question, right? Because we

had a Nobel Prize for AI with AlphaFold, of course. And so I think the next

very interesting point is going to be

when can AI on its own make a

breakthrough that is so interesting that

it would win a Nobel Prize. And I think

my guess for that level of capability

might be maybe 2027.

I think we're probably not going to find out for quite some time afterwards because of the delay in getting prizes, but I think by 2027, 2028, it's extremely likely that the models will be

smart enough and capable enough to

actually have that level of insight and

that level of discovery.

>> Amazing.
>> I think yeah, Nobel Prize, right? It's like

the Fields Medal for math and all these

kind of like advances. I think you know

that's that's what I'm truly excited

about actually is like AI that can help

us advance science and really unlock you

know both all the mysteries of the

universe and all the improvements in

living standards and abilities for us

that we could have if we understood the

world better.

>> All right. So extrapolating this even

further, then we get into the AI 2027 thing that you probably

saw. So this general idea of um if AI uh

can create novel science then AI can

create uh AI researchers and basically

AI can create itself which um

effectively leads to a discontinuity

moment. So I don't know if that in the

blog post that's the singularity or

whatever. uh but does that strike you as

somebody who's uh you know as deep in

the field as possible as something that

is possible in the short term, or are there counterbalancing forces that make that path to discontinuity harder as

you get closer?
>> Yeah, I think a true discontinuity is extremely unlikely. You know, obviously AI researchers are already using AI to accelerate

themselves, and so what's already happening, and what is likely to continue happening, is that we see

like a smooth improvement of

productivity

and then the main open question is how

does the difficulty of improving AI keep

scaling because a very common effect in

a very common issue in many scientific

fields is that we find all the easy

problems first

and then you know as we continue

exploring the problem the field it gets

more and more difficult to make

advances. So in my mind the main

question is do these two trends balance

each other out so that you know the AI

makes us increasingly more productive so

that as it gets more difficult to make

advances we just sort of stay on

trend and then we keep improving roughly

linearly or it is still too difficult

and then you know eventually after some

time we still see a slowdown but it

seems quite unlikely to me that we

improve in productivity so much that we

can actually accelerate. That would be

very unlike any other scientific field.

The normal course in many scientific

fields is that we actually need to

exponentially increase the research

effort just to keep making progress and

find new insights. For example, if you

look at pharmacology, discovering new

drugs, it's nowadays in the range of

billions of dollars to discover a new

drug versus you know maybe 100 years ago

a single scientist could discover the

first antibiotic right by accident. It's

not that we will be surprised by sudden

takeoff in progress where you know oh

we're just doing our research and

suddenly our model is like 10x better.

We will be seeing advance signs of, oh, we're making faster progress every single week; we can see something is happening. Maybe we decide to pause if we don't understand what's happening.

>> Do you think the current approach to

modern AI systems which effectively is

pre-training plus RL does that take us

to

where we want to be? Whether we call it

AGI, ASI, it's unclear what any of the

things mean. But um do you feel that

this paradigm is the right one or do we

need to come up with a different

architecture all together post

transformers or otherwise?

>> I think that's a great question and I

think it hugely depends on what you mean

by where do we want to be. So I think if

you're thinking of oh we want some kind

of system that can perform at roughly

human level in basically all tasks that

we care about productivity wise then I

think yeah it's extremely likely that

the current approach, pre-training plus RL, you know, transformers, is going to get us

there. If what you care about is oh we

want to have a model of intelligence

that is conscious in the same way we are

or you know more abstract qualities like

this I think that's maybe more uncertain

right and I think this is where a lot of

the confusion and disagreement comes

from. As you alluded to, with AGI, ASI, like

you know people talk about very

different things and they have very

different things in mind when they say

oh the current paradigm is going to get

there, it's not going to get there. I often, yeah, like to not use the term AGI or ASI and just talk very concretely

about you know what problem are we

solving what task are we solving what

quality are we interested in because I

find that often makes the actual disagreement much more obvious. But yeah,

I think if you're just thinking of in

terms of like is this going to help us

be massively more productive

is this going to massively accelerate

scientific progress then I think

definitely the current approach will get there.

>> And given how extremely deep you are in this, I cannot resist asking you sort of the trendy question du jour, uh, based on Richard Sutton's recent appearance on the Dwarkesh podcast. Uh, do you

think that the models of the future will

be trained in RL from scratch and that

actually having pre-training

um in addition to RL is the wrong way to

go?

>> Personally, I think that's unlikely. Not because pre-training is strictly

necessary. I think we may well be able

to train something completely from

scratch as we've been able to do in

other domains.

But more because pre-training

on these vast data sets we have just

brings us so much value that we would

from a practical point of view not want

to give it up.

So we, you know, we might well do some

agents that are trained from scratch out

of scientific interest. It could be very

interesting to learn about what would a

non-human intelligence look like.

But from a pragmatic point of view, I definitely think we would keep using pre-training data. Not just from an

efficiency point of view as well, but

also I think there is interesting safety

angles because by pre-training on you

know all this human knowledge we're

implicitly creating an agent that has

similar values as we do and I think that

is quite valuable for aligning

uh you know highly intelligent agent if

you already start out by caring about

sort of you know the same rough set of

values, that makes things much easier than if you create a, you know, arbitrary alien intelligence that may have

completely different values. Despite

having, you know, done a bunch of from

scratch RL in the past, I think I'm

often quite pragmatic about this.

>> I'd love to put a pin in that specific

discussion about um alignment uh and

what we do to ensure safety to later in

the conversation because I think that's

a super interesting uh vein. Um, but

maybe to switch tacks uh for a minute. Um, I'd love to go into a little

bit of your uh story and then uh the

monumental body of work that you've done uh at Google DeepMind before joining Anthropic around uh AlphaGo, AlphaZero, MuZero. Uh so maybe uh just the 3-4 minute version of your personal story from

when you were a kid. What was the path

that led you to become a world-class AI

researcher?

>> Yeah, actually when I was a kid, I

didn't have any expectations of becoming

an AI researcher. I was always very

interested in computers and I grew up in

the Austrian countryside in a small

village. So, you know, it's not like

there was a huge amount of things

happening, but computers were always

very interesting to me. You know, it's

like this connection to the wider world

to all these other interesting things.

And I was very interested in computer

games as well. And I think that's the

first time I became interested in

programming because I wanted to make my

own games

which I think is very common in people

who get into programming. But somehow I always got distracted by the technical

aspect of I'm going to build a very

general game engine that you know can

run any kind of game. And so I never

actually ended up making any game. I

learned a lot about making game engines

and different technologies and that's

how I ended up studying computer science

eventually in Vienna. Yeah, there was

like a classical computer science degree

and then by chance after my first year

in my first summer holidays I had an

internship with Google

and that's when I realized oh wow these

guys are doing really interesting things

they, you know, that's where their big clusters, the tens of thousands of machines, are. That's the first time I radically changed my plans from wanting

to stay in academia and you know I had

originally thought oh maybe I do a PhD. That's when I changed: oh no, actually I

just want to join these guys at Google

and I will finish my degree as quickly

as possible. And so that's actually yeah

when I got my full-time position at

Google, finished my degree the next year and then moved to London. So I was just

working as a normal software engineer at

Google working actually like in

advertising which I wasn't super excited

or interested in.

So like the technology was interesting,

right? is like these huge systems and

Google has you know famously great

technology but actually after you know a

year-ish of this I was pretty bored of advertising, and so I

was actually planning to leave Google

and thinking of maybe joining a hedge

fund going into finance when by chance I

saw an email in my work inbox that this

guy Demis was going to come to the

office and give some talk about Atari

and video games and AI

And it was actually a day off because I

was visiting a friend somewhere else in

England.

>> But that email looked so intriguing that

I was like, "Oh no, I'm going to have to

like take the train back to the office

right now and like see this talk."

>> And yeah, like I'm really glad like that

I saw this email and I did go back

>> because that's like the moment where I

decided, oh no, like no, I'm not going

to go into finance. I'm going to move to

Deep Mind.

>> I'm going to join these guys because

this looks clearly, you know, super

interesting, super amazing. They are

doing really interesting research.

>> All right. Tell us the story of um AlphaGo, AlphaGo Zero, AlphaZero, MuZero.

Uh because it feels like it's

fundamental AI knowledge that everybody

who has an interest in the space should

know about should understand the

progression in in particular. So um

starting with the beginning of AlphaGo, you alluded to it a second ago, but like what did it do? How was it trained, and then how did that evolve with each version?

>> AlphaGo, I think at that moment in time Go in the machine learning community was this

really big target where everybody felt

like oh you know it's this big unsolved

challenge

ImageNet had just happened before

so clearly you know models were starting

to do something with images and being

able to recognize them and predict them

and if you look at the go board you know

the right way it looks a lot like one of

those images that you classify. So there

was a lot of momentum around using

neural networks to somehow play go and

then at the time David Silver and Aja Huang at DeepMind had been working on Go; I think both of them had been working on Go for quite a while and had published some very

interesting papers

and that's when the idea of using Monte Carlo tree search with deep networks came together. So the idea was to train a deep neural network to predict which moves

you might want to play

and you know whether you're winning or

losing the game and then use the tree

search to really make a big plan of what

are all the possibilities in the game.

How would it go for you if you chose a

certain move or a different move? How

would the opponent respond?

>> And to explain this in super plain

English, uh, the term search in this case is, as you said, tree search; it's not what people normally think of as search, which is

searching a corpus. This is searching a

series of options effectively. Is that

is that the right way to think about it?

>> Yes. It's quite literally what you

might do when you play a game of chess

when you play any board game. It's quite

literally thinking of you know what move

am I going to do? What move is my

opponent going to do in return and then

thinking about many possible moves like

that and mapping out all the

possibilities in the future.
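To make the tree-search idea concrete, here is a minimal, hedged sketch of Monte Carlo tree search guided by a policy/value network. The game-state interface (copy, legal_moves, play, is_over, result) and the policy_value stub are hypothetical placeholders for illustration; this is not the actual AlphaGo implementation.

```python
import math
import random

# Hypothetical stand-in for the policy/value network: given a game state,
# return (prior probability for each legal move, estimated value in [-1, 1]).
def policy_value(state):
    moves = state.legal_moves()
    priors = {m: 1.0 / len(moves) for m in moves}   # uniform prior as a placeholder
    value = random.uniform(-1, 1)                   # placeholder value estimate
    return priors, value

class Node:
    def __init__(self, prior):
        self.prior = prior          # P(s, a) from the network
        self.visits = 0             # N(s, a)
        self.value_sum = 0.0        # W(s, a)
        self.children = {}          # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT-style rule: balance exploitation (Q) and exploration (prior / visit counts).
    total = sum(child.visits for child in node.children.values())
    def score(item):
        move, child = item
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)

def simulate(root_state, root, num_simulations=100):
    for _ in range(num_simulations):
        node, state, path = root, root_state.copy(), [root]
        # 1. Selection: walk down the tree along the most promising moves.
        while node.children:
            move, node = select_child(node)
            state.play(move)
            path.append(node)
        # 2. Expansion + evaluation: ask the network about the leaf position.
        if not state.is_over():
            priors, value = policy_value(state)
            for move, p in priors.items():
                node.children[move] = Node(prior=p)
        else:
            value = state.result()   # terminal position: use the real game result
        # 3. Backup: propagate the value along the path, flipping the sign
        #    because the players alternate turns.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value

def best_move(root_state, num_simulations=100):
    root = Node(prior=1.0)
    priors, _ = policy_value(root_state)
    for move, p in priors.items():
        root.children[move] = Node(prior=p)
    simulate(root_state, root, num_simulations)
    # Play the most visited move, as AlphaGo-style systems typically do.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```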

>> So deep learning plus search. What was

AlphaGo trained on?

>> Initial training phases of AlphaGo were on some human amateur games if I

remember correctly.

>> Mhm.

>> So basically just predicting if you have

humans playing many games of go try to

predict at each turn in the game what

move would they have played? And it

turns out that you know if you train a

deep network to do that you can get

something pretty decent like amateur go

level.

>> Mhm. Mhm. but not good enough to

actually beat a really strong player.

>> And by the way, just for the lore of it,

uh did you guys have any uh sense that it was going to crush Lee Sedol? So, the famous Go player that you mentioned earlier in the conversation. Was it obvious before? Was that a surprise?

>> We thought we had a pretty good chance,

but we were very nervous about like, you

know, are we going to win? Are we not

going to win? Are we going to lose?

Yeah. We actually had some bets

beforehand of like how many games we're

going to win or lose. Like I think it

was very ambitious to put the match as

early as we did. If we had wanted to be

a bit more safe, we may have like tried

to do it a few months later. And I think if we had done it a few months earlier, we would have probably lost. So it was very knife-edge, I guess, which also made it much more interesting

for us, right? Because

>> it really means that each game is like a

nailbiter of oh what's going to happen?

Are we going to win? Are we going to

play you know dumb move? What's going to

happen? So that was very exciting.

>> AlphaGo Zero, which was I believe the

year after. How was that different? What

was the progression?

>> The main change between AlphaGo and AlphaGo Zero was to remove all the human Go

knowledge. So instead of starting by

imitating human go games, we were

training it just from scratch playing

only against itself and rediscovering basically all of Go, completely figuring out from scratch how to play.

>> Did you give it the rules of the game?

>> We didn't give the rules of the game to the network per se, but we used the rules of the game to score the result. So basically, you know, it would play and we would tell it, you know, who won, who lost, or, you know, you cannot make this move.
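A minimal sketch of what such a self-play setup can look like, where the only rules knowledge is the scoring of finished games. The game-state interface, the choose_move callable (e.g. the search-based best_move from the sketch above), and the network.fit trainer are hypothetical placeholders; the real AlphaGo Zero pipeline (replay buffers, parallel self-play workers, the combined policy/value loss) is far more elaborate.

```python
def self_play_game(initial_state, choose_move):
    """Play one game against itself; return (state, move, outcome) training examples.
    `choose_move` stands in for a search-improved move choice such as best_move above."""
    state, history = initial_state.copy(), []
    while not state.is_over():
        move = choose_move(state)
        history.append((state.copy(), move))
        state.play(move)
    outcome = state.result()   # assumed +1 / -1 from the first player's perspective
    # Flip the sign so each example's value target is from the player to move.
    return [(s, m, outcome if i % 2 == 0 else -outcome)
            for i, (s, m) in enumerate(history)]

def training_loop(initial_state, choose_move, network, num_games=1000):
    """Alternate between generating self-play games and fitting the network to them."""
    for _ in range(num_games):
        examples = self_play_game(initial_state, choose_move)
        # Hypothetical trainer: move the policy head toward the searched moves
        # and the value head toward the actual game outcomes.
        network.fit(examples)
```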

>> So the next hop was uh AlphaZero, which was a year or two later. How is that different?
>> So with AlphaZero the idea was, well, obviously Go is a really beautiful

game but ultimately we would like to do

something more general right so can we

remove anything that is go specific and

verify that the algorithm can actually

solve more problems and in that case we

did that by trying to solve chess, Go and shogi, which is a Japanese chess basically, with the same algorithm, you

know the same network structure

just by running it in different games

and also making it you know much

simpler, elegant and faster. So

basically that was really laying the

groundwork for applying the algorithms

to solve real problems.
>> And then the next um stop in the journey was MuZero. And just to bring it home for uh people, you were uh I believe second author on AlphaGo Zero and you were the lead author on MuZero, which, um, I'm sure you're going to be very humble about it, but in the world of AI is as big a deal as it

gets. So um I'll say it so you don't have to say it. Um so MuZero, what was the next step, how was that different?

So the main motivation I had for making MuZero was that if you want to solve many

real world tasks, you have no way of

perfectly simulating what's going to

happen. And you know, if you play a

board game, obviously you know if you

make this move, you know what's going to

happen. It's like the piece is going to

go there, it's going to take a piece,

whatever, right? But if you actually

want to solve something like a robotics

task

or anything more complicated,

it's impossible for you to simulate

what's going to happen accurately. And

also we as a human we don't do this

right we just imagine in our head of oh

if I'm going to say this then he's

probably going to respond in that way

this meant that AlphaZero as it was

could not be applied to such problems

because it required some way of you know

simulating the game, scoring the outcomes. And the idea with MuZero was that, well, we already have a deep neural network, right, these networks can learn a lot of things, so why not teach it to

predict the future of the environment,

the future of the world.

Why not make the model be able to learn

for itself, what is going to happen

after each action it takes?
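A toy sketch of that idea: three learned functions (a representation, a dynamics model, and a prediction head) let the agent plan entirely in a latent space, with no game simulator. The tiny placeholder "networks" and the depth-limited search below are illustrative stand-ins, not MuZero's actual architecture, which plans with Monte Carlo tree search over the learned model.

```python
# Hedged illustration of the MuZero idea: plan over a *learned* model of the
# environment rather than a real simulator. All three "networks" are toy placeholders.

def representation(observation):
    """h: map a raw observation to an internal (latent) state."""
    return sum(observation) / len(observation)          # placeholder encoding

def dynamics(latent_state, action):
    """g: predict the next latent state and immediate reward for an action."""
    next_state = latent_state + 0.1 * action             # placeholder transition
    reward = -abs(next_state)                             # placeholder reward
    return next_state, reward

def prediction(latent_state):
    """f: predict action preferences and a value for a latent state."""
    policy = {-1: 0.5, +1: 0.5}                           # placeholder uniform policy
    value = -abs(latent_state)                            # placeholder value estimate
    return policy, value

def plan(observation, actions=(-1, +1), depth=3):
    """Imagine short action sequences purely in latent space and pick the best first action."""
    root = representation(observation)

    def rollout(state, remaining):
        _, value = prediction(state)
        if remaining == 0:
            return value
        best = float("-inf")
        for a in actions:
            nxt, reward = dynamics(state, a)
            best = max(best, reward + rollout(nxt, remaining - 1))
        return best

    scored = {}
    for a in actions:
        nxt, reward = dynamics(root, a)
        scored[a] = reward + rollout(nxt, depth - 1)
    return max(scored, key=scored.get)

# Example: choose an action for a toy observation without ever calling a real simulator.
print(plan([0.4, 0.8, 1.2]))
```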

>> After that, you also uh applied this to

uh code and math. So that was AlphaCode and AlphaTensor. So like zooming out a

little bit that evolution of um

reinforcement learning in games and then

code and then math. What did you learn

about the general power of search and

learning that is today relevant in

modern agentic AI systems? How did that

that whole body of work translate to

what uh you are doing today?

>> So games are a really good sandbox to

learn very quickly about a lot of the

reinforcement learning science: you know, the algorithms that work well, the kind

of problems that we encounter,

even from a technical point of view,

how do we build a learning system that

spans, you know, many data centers, uses

tens of thousands of machines because

games are very clean sandbox, very clean

environments, so we can make many good

experiments. And then now that we have a

much more general model, right, the

language models can do almost any task,

but they're much more complicated.

They're much slower to experiment with,

we can apply those same lessons of ah,

you know, we know how to build a really

robust reinforcement learning

infrastructure. And now we can build the

same one for language models or like you

know we know if you do this kind of RL

then the model will learn how to exploit

the reward and so we can apply the same

lessons the same mitigation techniques

to the language models.

>> If I understand correctly I think MuZero had um a learned world model.

>> Mhm.

So it basically sort of rehearses the future, for lack of a better expression. Uh so do modern LLM agents have anything like that? Do they have an internal world model that lets them preview actions before they commit?
>> So I

think yes, I would say that language models have not an explicit world

model but they do have an implicit model

of the world because to be able to

predict you know what is the next likely

word in this sentence how is this

paragraph going to continue they need to

internally model you know what is the

state of the world that makes this

person say that thing. And so it's actually somewhat similar to MuZero in the sense that MuZero also only had an

implicit world model. You know, it was

never trained to predict, you know, what

does the screen actually look like if

you take an action.

It was also only trained to implicitly

predict if I take this action, you know,

what is the next action I should take or

is it going to be good or bad for me. So

in both those cases, you have an

implicit representation of the world in

your model that you can use to make

predictions, but you're not actually

reconstructing the full state of the

world because

reconstructing the full state of the

world, you know, that can be very

expensive and complex.

>> If you think about, you know, super high

resolution video, audio signals, that is a very large amount of data that probably

you don't actually need. If you think of

human attention,

we are only aware of a very small subset

of what's actually going on all around

us all the time because that's you know

the most relevant information that we

actually need to make decisions.
>> And that goes back to the prior discussion about pre-training. So the reason why pre-training and RL work well together

is that you have that world model that's

um implicitly embedded into the corpus. Although the argument

against it is that it's what humans

think the world model is uh as uh

embodied by language versus what the

world model actually is. That's my understanding of the debate.
>> I mean, for the debate

I think different people have different

points of view so I don't want to speak

for anybody but

>> yes yes yes

>> but yes I think like pre-training on

this rich knowledge gives you some

representation of the world already so

that when you actually start to act and

interact with the world you can very

quickly make you know meaningful

decisions, meaningful actions. I yeah, I like to think of it, you know, in a similar way: if you look at many animals, when they are born they very quickly know how to move, how to run even, right? If you look at gazelles for example in the

savannah in a way that is like you know

clearly they did not have time to really

learn this from scratch right few

minutes or hours and you know in their

cases they did not do pre-training but

they have some evolutionary encoded

structure in their brain

>> because clearly it is very beneficial to

have some sort of knowledge to make your

learning more efficient.

>> Yeah. Just RL in nature would uh lead to not-so-good results. Like if

you're a gazelle and like you have to AB

test whether to run towards the lion or

away from the lion.

>> Exactly.

>> It's like, you know, thousands of generations of gazelles acquired this

knowledge over time.

>> Yeah.

>> it was encoded in their genes and their

brain structure in some way right and

then you get to start on top of that. I

think the main you know the main

challenge or the main thing you need to

watch out for is that you don't over-encode or you don't restrict your search space too much. If your pre-training, if your prior knowledge, prevents you from exploring something

that might be the correct course of

action that will be bad. So there you

know there is some danger there you have

to be aware of.

>> So this general idea of making

pre-training and RL work together in

modern AI systems seems to be the big idea or topic of 2025, although of course I know it's been years in the making. Uh why did it take so long? Uh, it feels like RL, you know, progressed in its own direction and then pre-training worked in its own direction and those were slightly separate. Why did it take so long to put them together? Is that just purely practical

and economic or anything else?

Scaling up the language models to the

massive degree that we scaled them up

took a lot of effort on its own. And

from a science point of view, from an

engineering point of view, pre-training and supervised training is more stable

and sort of easier to debug

because you don't have this feedback

cycle. You basically, you know, have a

fixed target and you're trying to learn

this target and so then you can focus on

like, you know, is my training working

and like is my infrastructure working

and then, you know, does it scale, and so on. Versus if you compare to RL: in RL you

have this feedback cycle of oh I learn

something and then I use that to

generate my new training data and then I

learn from that training data and now if

you have you know something is not

working it's very hard to figure out

where in this cycle your problem is

coming from. You know, maybe your training

update was bad and that's why you

suddenly started behaving badly

or maybe the way you decided, you know,

the way you select actions to behave is

not correct and so you generate bad training data and that's what messed up

everything.

>> So it's just much more complicated to

get working correctly. And so I think it

makes a lot of sense to you know first

scale up the pre-training the

architectures figure out something that

works pretty well especially if you can

already get pretty far by some fine

tuning some prompting and then when you

know when it's clear that these models

are really general they are really

useful and we have them in pretty stable

state then you know you can ramp up RL

and take them even further even in our

own work, right, if you look at AlphaGo, AlphaZero, we always followed a similar split as well, where we first set up the

architecture of the network, the

training using fixed supervised data.

And only when we had that working really reliably, only then did we do the full

RL loop and the full training just

because like debugging all of it at the

same time, you're just setting yourself

up for failure. It's really useful to be

able to isolate the component and say

like, you know, I have known good data

over here. I have a known good target

there.

If the thing in between is not working,

I can isolate it. And then, you know, we

can isolate all parts of the system.
>> How compute-intensive is it to scale RL, and are there scaling laws for RL the same way you have in pre-training?

>> There's less published literature about

it.

But I think if you look at all the RL literature over time, we see very

similar returns on compute in

pre-training and in RL where we can

invest exponentially more compute in RL

and keep getting benefits. There's going

to be some interesting research to come

to figure out what are the trade-offs

between pre-training and RL compute. You know, what should the split be for a big model, for example? Could it be 50/50, should it be like 1 to 10, and which way should the 1 to 10 go? So I think that's going to be extremely interesting, but so

far yeah we definitely see good returns

on both

>> What's the latest state-of-the-art or thinking uh in the field of um rewards? So in what you described uh for AlphaZero, AlphaGo, that was basically win/loss as a reward on the board. Then it sort of feels like we went into kind of like fuzzy human matching: this is good, this is not good. And now that we expand, um, as per the above, into more general fields where it's sort of unclear whether you win or lose, uh how does that work? What parts of the evolution are you working on? Are you excited about?

>> Personally, I don't work that much on

reward modeling. I mostly work on sort

of reasoning, planning, search time,

search compute, ways of making the model

smarter by spending more computation.

Yeah, thinking about rewards.

I think the reinforcement learning

process per se doesn't really care where

the reward comes from. The algorithms

are very happy to use any source of

reward. Whether that's like a human

feedback signal, it's like some

automated signal from like you know

winning, losing the game or passing a

test. Whether it's something more model

generated; for example, at Anthropic we had this paper about Constitutional AI, to

you know have the model itself score

whether you're following some

guidelines.
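A minimal sketch of that flexibility: the RL algorithm just consumes a scalar reward, whether it comes from a judge model scoring adherence to written guidelines, an automated test, or a blend of both. The judge and run_tests callables below are hypothetical placeholders; this illustrates the general shape of a model-graded reward, not Anthropic's Constitutional AI implementation.

```python
# Hedged illustration: RL is agnostic about where the scalar reward comes from.
# `judge` is a hypothetical callable (e.g. another model or a human rater) that
# returns a score in [0, 1]; nothing here is Anthropic's actual implementation.

GUIDELINES = "Be helpful, honest, and harmless."

def constitution_reward(prompt: str, response: str, judge) -> float:
    """Ask a judge model whether the response follows the written guidelines."""
    question = (
        f"Guidelines: {GUIDELINES}\n"
        f"Prompt: {prompt}\nResponse: {response}\n"
        "On a scale from 0 to 1, how well does the response follow the guidelines?"
    )
    return float(judge(question))

def test_pass_reward(code: str, run_tests) -> float:
    """Automated reward: 1.0 if the candidate code passes the test suite, else 0.0."""
    return 1.0 if run_tests(code) else 0.0

def combined_reward(prompt, response, judge, run_tests) -> float:
    """The RL algorithm only sees one scalar; any mix of signals can be blended into it."""
    return (0.5 * constitution_reward(prompt, response, judge)
            + 0.5 * test_pass_reward(response, run_tests))
```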

>> Mhm.

>> So it can be very flexible as to what kind of reward you follow.
>> The RLVR-like things, are those at this stage um stuff that you see commonly used? Um, any thoughts?

>> yeah I think we're seeing like huge mix

of rewards and environments and I think

it's very much people working very hard on figuring out what are the best

reward sources and how do we scale it up

and how do we you know get more rewards

more reliable rewards that will be one

of the key ingredients in scaling up RL

further.

>> And so switching from rewards, uh, what is the latest thinking uh in terms of uh training data for RL, uh again following the uh evolution from like AlphaGo where it used to be human data and then like self-play. Uh, how does that work? Where does the data uh come from and what kind of data works best to train modern RL?

>> yeah I guess the great thing about RL is

that the data is generated by your model

itself.

So the smarter our models become, the

better RL data we can generate,

the more interesting and complex tasks

they can solve, which then gives us more

and more data that we can train on, because, you know, the more complex

the task, the longer it takes to solve

the task and the more data it generates

that we can then use for training. I

think part of the challenge is to find

tasks that are really representative of

what people actually want to do with the

model because now language models are so

general people are using them for so

many different things. There's more and

more of a challenge of you know we need

to cover as many of those as possible to make sure that, you know, the model

is actually able to do this diverse set

of tasks.
>> What uh matters more for training data? Is it quality? Is it quantity? Is it recency?

>> I think that's like a very interesting

question that

maybe doesn't have like a super clear

answer yet or it's maybe still

interesting research to be done. I think

we've seen papers arguing for different

things or we've seen different benefits,

right? Like clearly we see pre-training

as we scale up the data, we can keep

improving but we've also seen very

interesting fine-tuning results, papers published where with a very small amount

of examples you can teach the model how

to do an interesting skill and I think

we don't have any good scaling laws yet

that tell us the trade-off especially I

think because it's very hard to measure

what is the quality of a data point

right like how good is this example

compared to this other example without

being able to measure this it's very

hard to quantify the trade-off in any way. Okay, I think intuitively it's

definitely true that if you have bad

data, RL doesn't work that well and if

you have very high quality data, it

becomes much more stable. For example, I think it was very clear in the AlphaZero days, where AlphaZero spends a lot of computation.

It does a lot of planning and search to

decide which move to take and so that

generates very high quality data to

train on which then resulted in RL

training that was incredibly stable. So

you know you can run it across

continents

take a long time to generate the data

and then train on it and it is very robust. Versus in modern RL with language models, you know, the difference in how

good is the model and what data it

generates that we then train on is not

so large.

because we more directly sample from the model and then train on it,

which then results in reinforcement

learning that is less stable. And so one

direction of scaling RL and making it

more stable

is by improving this by for example

putting more reasoning into your

language model to generate much more

high quality training data that can then

give us training that is much more

stable and that we can scale up much

more easily. I'd love to spend uh a

little bit of time now on the general

topic of RL and agents. So the famous

augent AI that everybody's been talking

about uh you know breastlessly for the

last year. Uh so for people listening

and you know as as often in an effort to

make this broadly accessible by by by

you know a group of general people in

tech um could you drive home the the uh

sort of intersection and overlap between

RL and and and agents? Does RL power

agents? How does that work?

>> Yeah. So I guess like maybe first let's

take a step on what do we actually mean

by agent?

>> Yes.

>> As compared to like as like a general

language model, right? The second most

debated question after agisi is what is

an agent? Yes,

>> I guess. Yeah, for our purposes, let's

just say that an agent is an AI that can

act on its own. You know, maybe take

some actions on a computer,

save some files, edit some files, send

an email, whatever you want, right? But

the main characteristic is that it

doesn't have to interact with the user

all the time. It can do things on its

own. The reason why RL is very important

for this actually connects back to

pre-training

because our pre-training data is not very agent-like. If you think of the

pre-training data right there is like

websites and books and, you know, all kinds of written text

that has a lot of information

but it doesn't have a lot of actions.

It doesn't really capture how the humans

actually interact with the world. So if

you take a raw pre-trained model, it's

not a very good agent.

You know, maybe you can prompt it a bit

and like, you know, sort of push it in

the right direction, but it's not going

to be very good at interacting and

especially it's not going to be very

good at

correcting for its own errors because

the pre-training data has no examples at

all of how is our agent going to fail.

And that's exactly where reinforcement

learning comes in because in RL we can

take our agent let it interact with the

environment and then directly train on

that interaction. So for example, if the

agent did well, we can reinforce those

actions. And if the agent did badly, we

can push it away from those actions. And

if the agent sort of did badly at the

beginning, but then recovered and managed to do well, then we can also

reinforce that recovery. And so that's

super important because it allows the

agent to actually learn from, you know,

its own distribution of behavior.
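As a rough illustration of reinforcing whole trajectories, here is a hedged REINFORCE-style sketch: a softmax policy over a few made-up agent "actions" is nudged toward behavior that ends in success, including recoveries. The toy environment, actions, reward, and baseline are assumptions for illustration, not how frontier labs actually train agents.

```python
import math
import random

# Toy REINFORCE-style sketch: a softmax policy over a few discrete "actions",
# trained on whole trajectories so that higher-reward behavior becomes more likely.
ACTIONS = ["read_file", "edit_file", "run_tests", "give_up"]
logits = {a: 0.0 for a in ACTIONS}          # the "policy" parameters

def sample_action():
    exps = {a: math.exp(l) for a, l in logits.items()}
    z = sum(exps.values())
    r, cum = random.random(), 0.0
    for a, e in exps.items():
        cum += e / z
        if r <= cum:
            return a
    return ACTIONS[-1]

def run_episode(max_steps=5):
    """Pretend environment: the tests pass only if we edit the file and then run them."""
    trajectory, edited = [], False
    for _ in range(max_steps):
        a = sample_action()
        trajectory.append(a)
        if a == "edit_file":
            edited = True
        if a == "run_tests" and edited:
            return trajectory, 1.0          # success: reward the whole trajectory
        if a == "give_up":
            return trajectory, 0.0
    return trajectory, 0.0

def reinforce_update(trajectory, reward, lr=0.1, baseline=0.3):
    """Nudge every action in the trajectory up or down by (reward - baseline)."""
    advantage = reward - baseline
    exps = {a: math.exp(l) for a, l in logits.items()}
    z = sum(exps.values())
    probs = {a: e / z for a, e in exps.items()}   # recomputed once per update, for simplicity
    for taken in trajectory:
        for a in ACTIONS:
            grad = (1.0 if a == taken else 0.0) - probs[a]   # d log pi / d logit
            logits[a] += lr * advantage * grad

for _ in range(2000):
    traj, r = run_episode()
    reinforce_update(traj, r)

print({a: round(l, 2) for a, l in logits.items()})   # edit_file / run_tests should tend to dominate
```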

>> Mhm. And that just makes it much more robust because now it doesn't have to generalize to something it has never seen before. It can actually learn, you know, on the actual problem that it is trying to solve. And that's why, you know, RL is really unlocking so many agentic capabilities.

>> Now, if I'm an AI builder today building an AI app and I build it on top of Anthropic, uh, Anthropic or whatever model is going to come with some of this sort of batteries included. Uh, but as a builder on top, do I need to do my own RL? Uh, there is this emerging space of like RL as a service, where, you know, for this task or that task that I build on top of a general model, it sort of offers you the ability to do RL. Or can I do a lot of damage just through prompting, or like maybe supervised fine-tuning first?

>> I think nowadays, with the capabilities of, like, you know, top Anthropic Claude models, top OpenAI GPT models, you don't need to do

any fine tuning. You can take the model

as is,

write your own tools, your own harness,

and benefit from that agentic training

because doing good agentic fine tuning

is actually very hard. And so it's quite hard to do better than the top

frontier models that you might get. But

on the contrary, coming up with good tools and a good representation of your task makes a huge

difference. So like you know depending

on how you express your problem for the

model can make it way harder or way

easier and so you can get a lot of

mileage out of that.

>> What's currently missing to achieve the big dream of agentic AI? Is it model capabilities at the core, or is it sort of like boring, uh, quote unquote, engineering around reliability, tool use, safety? Uh, what needs to happen?
>> I think there's basically

improvements needed around the whole

space.

Making, you know, the model better able to correct its own errors, making the model

better able to continue going for long

times without getting distracted.

Making the model just smarter in

general. Maybe making the model faster.

Like there's basically like, you know, a

whole set of things that we know that we

can improve.

There's probably yeah not one individual

blocker

and that's why we will continue to see

sort of smooth incremental progress over

model releases but sort of given

how many things we know there are that

we can do better on and improve. Yeah,

I'm quite excited about where models are

going to end up. I think that's

actually, you know, one of the reasons

why AI is a very fun field is that there

are so many low-hanging fruits that, you

know, you can do much better on, but

already the current models are so good

that it's very fun to work on it. It's

like, oh, I can fix this thing. It'll be

even better

>> versus, you know, if you're in a place

where everything has already been solved

and it's like really hard to figure out

how to make it better, it's a very

different story.
>> Let's spend a minute on evals, uh, and um we touched upon this a little bit but just to give it some proper space. So there was, um, you know, in your blog post that we talked about at the very beginning of this conversation, this concept of external benchmarks, and then you quoted in your piece Goodhart's law. Uh, so first of all, what is Goodhart's law, and then uh how should labs compare

results uh so that it doesn't end up

with this kind of leaderboard theater

that we've seen a little bit in the last

you know couple of years.

>> Yeah. So Goodhart's law basically says that any measure that becomes a target stops

being a good measure and you know you

can think of that intuitively that if

you start paying for example programmers

based on how many lines of code they

write well suddenly they will discover

many ways to add more lines of comments

which is you know completely useless and

this is a very yeah general effect that

obviously right if you give people an

incentive that they should optimize they

will try very hard. Yes.

>> And we also see this with language model

benchmarks. Of course, people want to

get promoted. They want to launch their

model. So any benchmark that is too

easily measured or that has a lot of

attention on it, people will optimize

very hard for it. Which means that

probably the model will look very good

at that benchmark. But if you then use

it for your own task, you might get

different performance. Yeah. You ask

about like how what do we do about this?

It's very hard to prevent people from

optimizing on the benchmark.

So one possibility is to just periodically create completely new, held-out benchmarks that nobody has seen before, and that gives you a fairly good estimate of model performance. I know, for example, that a lot of researchers have their own toy problems that they use to test all the models, precisely for that reason: it's a set of problems that nobody has seen, so you have a pretty good guess that it's going to give you an unbiased estimate. If you're an individual or a company trying to decide which model to use, it's probably something similar: just make your own internal benchmark that really represents what you care about, and then measure on that. I think that's likely to be the most objective, most accurate way of measuring.
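A minimal sketch of what such an internal benchmark can look like in practice, assuming a placeholder `generate(prompt)` model client and illustrative task cases; real evals for your workload would have many more cases and richer checks.

```python
# Minimal sketch of a private, task-specific eval: held-out prompts that
# mirror your real workload, each with its own pass/fail check.
# `generate(prompt)` is a placeholder for whatever model client you use.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # returns True if the output is acceptable

CASES = [
    EvalCase("Convert '2024-03-05' to DD/MM/YYYY.", lambda out: "05/03/2024" in out),
    EvalCase("Summarize: 'The deploy failed because the config key was missing.' "
             "Mention the root cause.", lambda out: "config" in out.lower()),
]

def run_eval(generate: Callable[[str], str], cases=CASES) -> float:
    """Return the fraction of cases the model passes."""
    passed = sum(1 for case in cases if case.check(generate(case.prompt)))
    return passed / len(cases)
```

Keeping a set like this private is what keeps it useful: the moment it leaks into training data or becomes a public leaderboard, Goodhart's law applies to it too.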

>> Internally, what does that look like at a place like Anthropic, or previously at DeepMind? I know there are teams that are focused on evals. How do you think about what works and what doesn't in terms of internal evals?

>> It definitely used to be easier to have good evals. Five years ago, with the tasks we were doing, I think it was easier to measure model performance. Nowadays it's much more difficult, and I think we try not to over-rely on evals so much, because it's quite hard, for example, to measure how good this model really is at writing code. I think it's one of the big unsolved, very important problems in the field: making really good evals that are cheap to run, reliable, and accurate. It's easyish to make an eval that hits one of those, but getting all three is quite hard. For example, at the beginning we were talking about OpenAI's GDPval, and that one is very accurate and unbiased, but it's very expensive, because what it actually involves is taking human experts, having them do the task, then comparing the model's output to the experts' and rating it with multiple people. So it's very accurate, but it's extremely expensive to do.

>> And related to that topic of evals, what's the latest in terms of our ability, or I should say your ability, to truly understand how models work, the general field of mechanistic interpretability? You alluded earlier to the fact that RL, if I understood correctly, sometimes makes this a bit harder because it occasionally does things in a more inscrutable way (my words, maybe not yours). So what is the latest, and does RL indeed make things harder or easier?

>> Oh, so what I meant before is that debugging RL in general, completely unrelated to interpretability, is harder because there are more moving parts. But it is also true that if you're not careful with RL, you can make interpretability harder. For example, one common thing with modern models is that they do reasoning with a chain of thought. You can look at the chain of thought to see what the model's internal thoughts are, and then you might also have the thought, oh, maybe I should use that as a reward signal in RL and punish the model if it thinks the wrong thing. But then suddenly you've completely destroyed your interpretability angle. So you have to be careful not to do RL on the signals that you actually want to use to interpret what the model is thinking of doing.
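A toy sketch of that caution: compute the RL reward only from the final answer and deliberately leave the reasoning text out, so the chain of thought stays a usable window into the model. The function names and sample format here are illustrative, not any real RL library.

```python
# Toy sketch: keep the chain of thought out of the reward computation so it
# remains an honest signal, rather than something optimized to look good.
# Names here are illustrative, not a real RL framework.

def grade_answer(answer: str, reference: str) -> float:
    """Reward based only on the final answer (exact match, for simplicity)."""
    return 1.0 if answer.strip() == reference.strip() else 0.0

def reward(sample: dict, reference: str) -> float:
    # sample = {"chain_of_thought": "...", "answer": "..."}
    # Deliberately ignore sample["chain_of_thought"]: scoring or penalizing the
    # reasoning text would train the model to produce reassuring-looking
    # thoughts, destroying its value for interpretability and oversight.
    return grade_answer(sample["answer"], reference)
```

The same caution applies to any signal you intend to read for oversight: once it feeds the reward, it gets optimized.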

>> That said, I think there are some extremely exciting interpretability things happening, including in mechanistic interpretability. Actually, last year, I think even before I joined Anthropic, there was the super cool Golden Gate Claude model, where they found the neurons in Claude that were responsible for the Golden Gate concept and then modified them to make a version of Claude that really loved the Golden Gate Bridge in San Francisco. That's a really vivid example of, ah, we really understand what's happening in this model, and what better way is there to verify that understanding than actually changing the behavior of the model? So I think that's a super important direction for safety. As the models get smarter, we really need to be able to understand what the model is thinking internally. What are the values it has? Is it lying to us? Is it actually genuinely following the instructions? So it's definitely an extremely important area to invest in and work in. Especially for people interested in working in AI or doing AI research, I think interpretability is a great area to get into.
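A toy PyTorch sketch of the verify-by-intervention idea behind that example: given a direction in activation space associated with a concept, add it into a layer's output with a forward hook and watch the behavior shift. The tiny model and random "concept" vector here are stand-ins, not Claude's actual features.

```python
# Toy sketch of activation steering: push one layer's activations along a
# "concept direction" via a forward hook and observe how the output changes.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

concept_direction = torch.randn(32)            # pretend this encodes a concept
concept_direction /= concept_direction.norm()

def steer(module, inputs, output, scale=4.0):
    # Forward hook: returning a tensor replaces the layer's output downstream.
    return output + scale * concept_direction

handle = model[0].register_forward_hook(steer)
x = torch.randn(1, 16)
with torch.no_grad():
    steered = model(x)
handle.remove()                                # restore normal behavior
with torch.no_grad():
    baseline = model(x)

print("change in output norm:", (steered - baseline).norm().item())
```

Production interpretability work (sparse-autoencoder features and the like) is far more involved; this only illustrates the idea of verifying an understanding by changing behavior.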

>> Yeah, perfect segue for the last part of this conversation. I'd love to zoom out and talk about the impact of AI. If we think that we are on the exponential and that things are only going to accelerate from here, what does that mean? And certainly safety and alignment, which is a core value at Anthropic, hopefully in other parts of the field as well, but Anthropic is particularly vocal about safety and alignment, let's say. How does that actually manifest? We just talked about interpretability. For people who are concerned that this is going too fast and that we collectively are creating a quote unquote monster, can you give us a glimpse into the kind of work that is done for alignment and safety at a place like Anthropic?

>> Yeah, I think the focus on safety and alignment pervades all of Anthropic, and there are very rigorous processes when we train a model and whenever we want to release a model, both to analyze the capabilities of the model and to verify its alignment: ensure that it does not do harmful things on its own, ensure that it does not enable malicious users to do harmful things. To the point where, if we are unsure about the safety of a model, we will delay the launch, and until we're sufficiently sure that it is actually harmless, we will not launch and release the model. Which, I guess, shows that people take safety much more seriously than any financial return or revenue. Also, in terms of research and resources, the teams working on safety and interpretability are a big focus of the company, which gives me a lot of confidence that we actually care about this and put a lot of effort into it.

>> And at a more technical level, to tie back to an earlier part of the conversation where we were discussing pre-training and safety: is safety and alignment an RL problem? By that I mean, the beauty of pre-training is that you import that world model, as we were discussing, but arguably you also import into your brain a lot of bad stuff if you collect data from the internet; as we know, there are good things but also a lot of toxic content. So is alignment largely using RL to get rid of the bad stuff that is built into the pre-training?

>> We can definitely use RL to shape the model's behavior and ensure that, for example, given adversarial or bad input, it behaves safely, knows that it can refuse, or is robust to attempts to hack the model. But I wouldn't view alignment as just an RL problem. It goes throughout the whole stack. You might, for example, filter the pre-training data in some way. After training, you might have classifiers that monitor the model's behavior to ensure that it is actually aligned. When you write the system prompt for the model that you use, you might put safety guidelines in there. So safety and alignment really pervade the whole of research and the whole of product and deployment; it's not just isolated into any one part.
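A schematic sketch of that "whole stack" idea: data filtering before training, safety guidance in the system prompt at inference, and a classifier on the output. Every function here is a deliberately crude placeholder for much more involved systems; none of this is Anthropic's actual pipeline.

```python
# Schematic sketch of safety layered through the stack: filter training data,
# set safety guidance in the system prompt, and gate outputs on a classifier.
# Each function is a crude stand-in for far more involved systems.
from typing import Callable, Iterable

def filter_pretraining_data(docs: Iterable[str]) -> list[str]:
    """Drop documents a (placeholder) filter flags before training ever starts."""
    blocked = ("how to build a weapon",)
    return [d for d in docs if not any(b in d.lower() for b in blocked)]

SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that could cause serious harm."
)

def output_classifier(text: str) -> bool:
    """Placeholder post-hoc monitor: return True if the response looks safe."""
    return "instructions for harming" not in text.lower()

def safe_generate(user_msg: str, generate: Callable[[str, str], str]) -> str:
    """Generate with the safety system prompt, then gate on the classifier."""
    response = generate(SYSTEM_PROMPT, user_msg)  # model client supplied by caller
    return response if output_classifier(response) else "I can't help with that."
```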

>> And then another super interesting topic in the same vein of the impact of AI is obviously the discussion around jobs. If, as per the GDPval discussion, agents are becoming just as good as or better than humans, what does that mean for all of us in terms of our jobs? What have you learned after the experience of AlphaZero and AlphaGo that could give us a glimpse into what may happen once we all have super powerful agents doing our jobs?

>> So I think the first thing that we didn't talk about yet is that artificial intelligence, and this may sound a bit simplistic, is quite different from human intelligence. We can see that the model may be much better than us at some tasks, like calculation obviously, and much worse than us at other tasks. So I don't think it is at all going to be a one-for-one replacement. It's going to be much more complementary: the model is really good at something that maybe I really don't like doing, or I'm not interested in, or I'm very bad at, and then I'm much better than the model at another part. So I think it's going to be a gradual process where we all incrementally start using models more and more to improve our own productivity, rather than having a model that is able to do, one for one, exactly the set of things we can do. For example, I use Claude all the time to refactor code, or maybe to write some front-end code that I don't want to write, while at the same time there are other parts where I'm still clearly much better than Claude. So there is a synergy of using the best, most productive skills; I guess economists call it comparative advantage. There is this long process of both of us improving our productivity incrementally, and I think that process is going to give us some time to figure out, politically and economically, how we want to benefit from this massive productivity increase. Even independently from AI, right, the promise of technology has long been that, oh, we're all going to be so productive, so wealthy, that we need to work much less.

>> Yet, mysteriously, right, we've all had a 40-hour work week for decades. And so, you know, I think it's much more of a political, social problem of figuring out how we actually benefit from all these improvements and bring the increases in wealth and productivity to everybody, and it's much less a technological problem.

>> Mhm.

>> Which also means that we can't really solve it with technology. We have to solve it at a sort of democratic, political level: how do we spread these benefits?
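For the comparative-advantage point above, a tiny numeric illustration with made-up productivity rates: even when the model is faster at both tasks, the team produces more when the human keeps the task they are relatively less bad at.

```python
# Tiny illustration of comparative advantage with made-up rates: the model is
# faster at BOTH tasks, yet total output is higher when each party specializes
# in what they are *relatively* best at.
HOURS = 8
model_rate = {"boilerplate": 10, "design": 4}  # units per hour
human_rate = {"boilerplate": 2,  "design": 3}

def output(model_split, human_split):
    """Total units when each spends (boilerplate_hours, design_hours)."""
    m_b, m_d = model_split
    h_b, h_d = human_split
    return (model_rate["boilerplate"] * m_b + model_rate["design"] * m_d
            + human_rate["boilerplate"] * h_b + human_rate["design"] * h_d)

even = output((HOURS / 2, HOURS / 2), (HOURS / 2, HOURS / 2))  # 56 + 20 = 76
specialized = output((HOURS, 0), (0, HOURS))                    # 80 + 24 = 104
print(even, specialized)
```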

>> Do you think that increases inequality? As you think about the impact of AlphaGo and MuZero, what happened to the top Go players and what happened to the top chess players? Did they disappear, or did they get enhanced and become better?

>> Yeah, I think at least in the case of chess and Go there has been more interest, and it has become much easier for people to study how to play Go or chess, because now you don't need to find an expert tutor; anybody can practice on their own and put in a lot of time. And I guess chess streamers are very popular on Twitch, right? Similarly, a lot of students are using language models to study. I think also for coding: Claude Code and these agents raise the bar of what anybody who has an idea can accomplish on their own. The larger picture, whether it increases or decreases inequality, is quite hard to forecast. It both raises the floor of what any person can accomplish, but it also gives very productive people the ability to be even more productive. It's possible that we see quite a difference between countries, depending on the taxation and social redistribution systems they have, in whether inequality increases or decreases overall. Overall, I'm quite excited that it is very much non-zero-sum; it very much increases the total wealth available in society. If you think about progress, if you think about prosperity, that is the most important thing. Redistributing the pie is kind of a loser's game; to get more wealthy, we really need to grow the pie. If you think of the agricultural revolution or the industrial revolution, the reason why we have much better lives nowadays is because we're so much more productive, we have so much more wealth. And so that's the key step we want to unlock. If we manage to make everybody in society 10 times more productive, what kind of abundance can we achieve? I think that's the key question, right? What advances does that unlock in medicine: curing diseases, halting aging? What does it unlock in terms of energy? Obviously, we have a climate crisis; we need more energy to sustain our lifestyle. What advances in materials science can we have? All of those are basically bottlenecked on how much intelligence we have access to and how we can apply it. So I'm incredibly optimistic about what we will be able to unlock in the next 5 years. I think we can go extremely far.

>> Well, that feels like a wonderful place to leave it. Thank you so much, Julian. This was absolutely fantastic. Thank you for spending time with us.

>> Yeah, thank you for all the exciting questions and for giving me the time.

Hi, it's Matt Turck again. Thanks for listening to this episode of the MAD Podcast. If you enjoyed it, we'd be very grateful if you would consider subscribing if you haven't already, or leaving a positive review or comment on whichever platform you're watching or listening to this episode from. This really helps us build the podcast and get great guests. Thanks, and see you at the next episode.
