
Ex-Google CEO: What Artificial Superintelligence Will Actually Look Like w/ Eric Schmidt & Dave B

By Peter H. Diamandis

Summary

Key Takeaways

  • Digital superintelligence is imminent: Digital superintelligence, defined by AI's ability to generate its own scaffolding, is expected within the next decade, with significant advancements predicted for 2025. [00:04]
  • AI's unprecedented energy demand: The AI revolution requires an estimated 92 gigawatts of additional power in the US alone, a demand that current nuclear power plant construction cannot meet in time, highlighting a critical energy bottleneck. [02:31]
  • AI will automate programming and math tasks: AI is on the cusp of replacing most programming and mathematical tasks due to their limited, scale-free language sets, leading to the emergence of world-class AI mathematicians and programmers within the next two years. [12:48]
  • The proliferation problem of AI models: The rapid advancement and potential for AI models to be miniaturized and distributed globally, especially through open-source initiatives, poses significant proliferation risks, making control and oversight a major challenge. [19:34], [41:36]
  • AI's impact on jobs, more opportunities, higher pay: While AI will automate dangerous and low-status jobs, it will ultimately create more, higher-paying jobs by increasing overall productivity and wealth, with human assistants augmenting capabilities rather than replacing them entirely. [47:23]
  • AI will personalize and shorten content consumption: AI's ability to understand individuals deeply allows for highly personalized and persuasive content, leading to shorter, more efficient consumption of information and entertainment, potentially diminishing traditional long-form media. [53:20], [56:57]

Topics Covered

  • Why is electricity the natural limit for AI growth?
  • Can "Mutual AI Malfunction" Prevent a Geopolitical Catastrophe?
  • Will Superintelligence Unify Beyond Human Capabilities Soon?
  • How will unregulated AI erode democracy and human values?
  • What is the ultimate moat for new AI companies?

Full Transcript

When do you see what you define as

digital super intelligence?

Uh, within 10 years.

The AI's ability to generate its own

scaffolding is imminent. Pretty much

sure that that will be a 2025 thing. We

certainly don't know what super

intelligence will deliver, but we know

it's coming.

And what do people need to know about

that?

You're going to have your own polymath.

So, you're going to have the sum of

Einstein and Leonardo da Vinci in the

equivalent of your pocket. Agents are

going to happen. This math thing is

going to happen. The software thing is

going to happen. Everything I've talked

about is in the positive domain, but

there's a negative domain as well. It's

likely, in my opinion, that you're going

to see.

Now, that's a moonshot, ladies and

gentlemen.

Hey, everybody. Welcome to Moonshots.

I'm here live with my Moonshot mate,

Dave Blundin. Uh we're here in our Santa

Monica studios and we have a special

guest today,

Eric Schmidt, the author of Genesis. We

talk about China. We're going to talk

about, you know, digital super

intelligence. We'll talk about, you

know, what people should be thinking

about over the next 10 years.

And we're talking about the guy who has

more access to more actionable

information than probably anyone else

you could think of. So, it should be

pretty exciting.

Incredibly brilliant. All right, stand

by for a conversation with the Eric

Schmidt, former CEO of Google and

an extraordinary investor and uh and

thinker in this field of AI.

Let's do it.

Eric, welcome back to Moonshots.

It's great to be here with you guys.

Thank you. It's been uh it's been a long

road since I first met you at Google. I

remember uh our first conversations were

fantastic. Uh it's been a crazy month in

the world of AI, but I think every month

from here is going to be a crazy month.

And so I'd love to hit on a number of

subjects and get your your take on them.

I want to start with probably the most

important point that you've made

recently that got a lot of traction, a

lot of attention, which is that AI is

underhyped when the rest of the world is

either confused, lost, or thinks it's,

you know, not impacting us.

We'll get into it in more detail, but quick

most important point to make there.

AI is a learning machine. Yeah.

And in network effect businesses, when

the learning machine learns faster,

everything accelerates.

It accelerates to its natural limit. The

natural limit is electricity.

Not chips,

electricity really. Okay.

So that gets me to the next point here,

which is uh a discussion on AI and

energy. So, we saw Meta recently

announcing uh that they signed

a 20-year nuclear contract with uh with

Constellation Energy. We've seen Google,

Microsoft, Amazon, everybody buying

basically nuclear capacity right now.

That's got to be weird

uh that private companies are

basically taking into their own

hands what was a utility function before.

Um,

well, just to be cynical, I I'm so glad

those companies plan to be around the 20

years that it's going to take to get the

nuclear power plants built.

In my recent testimony, I talked about

the current expected need for the AI

revolution in the United States is 92

gigawatts of more power.

For reference, one gigawatt is one big

nuclear power station. And there are

none essentially being started now.

And there have been two in the last

what, 30 years built. There is

excitement that there's an SMR, small

modular reactor coming in at 300

megawatts, but it won't start till 2030.
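The scale of the gap is easy to check with back-of-the-envelope arithmetic. A minimal sketch: the 92 GW, 1 GW-per-plant, and 300 MW SMR figures come from the conversation; the arithmetic is simply dividing one by the other.

```python
# Back-of-the-envelope check of the AI power gap described above.
AI_DEMAND_GW = 92        # additional US power need cited for the AI buildout
BIG_NUCLEAR_GW = 1.0     # one large nuclear plant is roughly 1 gigawatt
SMR_MW = 300             # the small modular reactor size mentioned

plants_needed = AI_DEMAND_GW / BIG_NUCLEAR_GW
smrs_needed = AI_DEMAND_GW * 1000 / SMR_MW   # convert GW to MW

print(f"~{plants_needed:.0f} full-size nuclear plants, or ~{smrs_needed:.0f} SMRs")
# → ~92 full-size nuclear plants, or ~307 SMRs
```

With essentially zero large plants under construction and the first SMR not due until 2030, neither path closes the gap on the timeline discussed.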

As important as nuclear, both fission and

fusion are, they're not going to arrive

in time to get us what we need as a

globe to deal with our many problems and

the many opportunities that are before

us. Do you think uh so if if you look at

the sort of three-year timeline toward

AGI, do you think if you started a

fusion reactor project today that won't

come online for five, six, seven years,

is there a probability that the AGI

comes up with some other breakthrough

fusion or otherwise that makes it

irrelevant before it even gets online?

A very good question. We don't know what

artificial general intelligence will

deliver. Yeah. And we certainly don't

know what super intelligence will

deliver, but we know it's coming.

So, first we need to plan for it. And

there's lots of issues as well as

opportunities for that. But the fact of

the matter is that the computing needs

that we name now are going to come from

traditional energy suppliers in places

like the United States and the Arab

world and Canada and the Western world.

And it's important to note that China

has lots of electricity. So if they get

the chips, it's going to be one heck of

a race.

Yeah. They've been scaling it uh you

know at two or three times. The US has

been flat for how long in terms of

energy production?

Um from my perspective uh infinite. In

fact,

electricity demand declined for a while

as have overall energy needs because of

conservation and other things.

But the data center story is the story

of the energy people, right? And you sit

there and you go, how could these data

centers use so much power? Well, and

especially when you think about how

little power our brains use. Well, these

are our best approximation in digital

form of how our brains work. But when

they start working together, they become

superbrains. The promise of a superbrain

with a 1-gigawatt data center, for example,

is so palpable. People are going crazy.

And by the way, the economics of these

things are unproven. How much revenue do

you have to have to support $50 billion in

capital? Well, if you depreciate it over

three years or four years, you need to

have 10 or 15 billion dollars of capital

spend per year just to handle the

infrastructure. Those are huge

businesses and huge revenue, which in

most places is not there yet.
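Schmidt's depreciation point can be sketched directly. A hedged illustration: the $50B and the three-to-four-year window are his figures; straight-line depreciation is my simplifying assumption.

```python
# Straight-line depreciation of a $50B data-center buildout, per the discussion above.
capex_billion = 50

for years in (3, 4):
    annual = capex_billion / years   # annual spend just to keep pace with depreciation
    print(f"Depreciated over {years} years: ~${annual:.1f}B/year for infrastructure alone")
```

That yields roughly $12.5B to $16.7B per year, in line with the "10 or 15 billion dollars of capital spend per year" cited, and it is spend that has to be covered before any profit.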

I'm curious, there's so much capital

being invested and deployed right now in

SMRs, in nuclear, in bringing Three Mile

Island back online, uh in fusion

companies. Why isn't there an equal

amount of capital going into making uh

the entire you know chipset and compute

just a thousand times more energy

efficient?

There is a similar amount of capital

going in. There are many, many startups

that are working on non-traditional ways

of doing chips. The transformer

architecture which is what is powering

things today has new variants. Every

week or so I get a pitch from a new

startup that's going to build inference

time, test time computing which are

simpler and they're optimized for

inference. It looks like the hardware

will arrive just as the software needs

expand.

And by the way, that's always been true.

We old-timers had a phrase: um, Grove

giveth and Gates taketh away. So Intel

would improve the chipsets right way

back when

and the software people would

immediately use it all and suck it all

up. I have no reason to believe

that that Grove-and-Gates law has

changed. If you look at

the gains in like the Blackwell chip or

the MI350 chip from AMD,

these chips are massive supercomputers

and yet we need, according to these people,

hundreds of thousands of these

chips just to make a data center work.

That shows you the scale of what these

kinds of thinking algorithms require. Now you sit

there and you go what could these people

possibly be doing with all these chips?

I'll give you an example. We went from

language to language, which is how

ChatGPT can be understood, to reasoning

and thinking. If you want to look at an

OpenAI example, look at OpenAI o3,

which does forward and back

reinforcement learning and planning.

Now the cost of doing the forward and

back is many orders of magnitude beyond

just answering your question for your

PhD thesis or your college paper that

planning the back and forth is

computationally very very expensive. So

with the best energy and the best

technology today we are able to show

evidence of planning. Many people

believe that if you combine planning and

very deep memories you can build human

level intelligence. Now of course they

will be very expensive to start with but

humans are very very industrious and

furthermore the great future companies

will have AI scientists, that is,

non-human scientists, and AI programmers,

as opposed to human programmers, who will

accelerate their impact. So, if you

think about it, going back to you're the

author of the abundance thesis, as best

I can tell, Peter, you've talked about

this for 20 years. You saw it first. It

sure looks like if we get enough

electricity, we can generate the power

in the sense of intellectual power, to

generate abundance along the lines that

you predicted two decades ago.

Every week, I study the 10 major tech

meta trends that will transform

industries over the decade ahead. I

cover trends ranging from humanoid

robots, AGI, quantum computing,

transport, energy, longevity, and more.

No fluff, only the important stuff that

matters, that impacts our lives and our

careers. If you want me to share these

with you, I write a newsletter twice a

week, sending it out as a short

two-minute read via email. And if you

want to discover the most important meta

trends 10 years before anyone else,

these reports are for you. Readers

include founders and CEOs from the

world's most disruptive companies and

entrepreneurs building the world's most

disruptive companies. It's not for you

if you don't want to be informed of

what's coming, why it matters, and how

you can benefit from it. To subscribe

for free, go to diamandis.com/metatrends.

That's diamandis.com/metatrends

to gain access to trends 10 plus years

before anyone else.

Let me throw some numbers at you just to

reinforce what you said. You know, we

have a couple companies in the lab that

are doing voice customer service, voice

sales with the new, you know, just as of

the last month.

Sure.

And the value of these

conversations is $10 to $1,000. And the

cost of the compute is, you know, maybe

two or three concurrent GPUs is optimal.

It's like 10 or 20 cents. And so they would

buy massively more compute to improve

the the quality of the conversation.

There aren't even close to enough. We

count about 10 million concurrent phone

calls that should move to AI in the next

year or so.
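The unit economics just described can be put in one place. A sketch using only the figures from the conversation ($10-$1,000 of value per call, roughly 10-20 cents of compute); the variable names are mine.

```python
# Unit economics of an AI voice call, using the figures cited above.
value_per_call_low, value_per_call_high = 10.0, 1000.0   # dollars of value per call
compute_cost_low, compute_cost_high = 0.10, 0.20         # dollars of GPU cost per call

# Worst case: lowest value against highest compute cost.
worst_ratio = value_per_call_low / compute_cost_high
best_ratio = value_per_call_high / compute_cost_low

print(f"Value exceeds compute cost {worst_ratio:.0f}x to {best_ratio:.0f}x")
```

A 50x margin even in the worst case is why, as noted, these companies would happily buy massively more compute to improve call quality.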

And my view of that is that's a good

tactical solution and a great business.

Let's look at other examples of tactical

solutions that are great businesses.

And I obviously have a conflict of

interest talking about Google because I

love it so much. So with that in

mind, look at the Google strength in

GCP, now Google's cloud product,

where they have a complete, fully

served enterprise offering for

essentially automating your company with

AI.

Yeah.

And the remarkable thing, and this to

me is shocking, is you can in an

enterprise write the task that you want

and then using something called the

model context protocol you can connect

your databases to that and the large

language model can produce the code for

your enterprise. Now, there's 100,000

enterprise software companies,

middleware companies that grew up in the

last 30 years that I've been working on

this that are all now in trouble because

that that interstitial connection is no

longer needed

with their business

and of course they'll have to

change as well. The good news for them

is enterprises make these changes very

slowly. If you built a brand new

enterprise um architecture for ERP and

MRP, you would be highly tempted to not

use any of the ERP or MRP suppliers, but

instead use open-source libraries,

essentially use BigQuery or the

equivalent from Amazon, which is

Redshift, and essentially build that

architecture and it gives you infinite

flexibility and the computer system

writes most of the code. Now,

programmers don't go away at the moment.

It's pretty clear that junior

programmers go away. The sort of

journeymen, if you will, of the

stereotype because these systems aren't

good enough yet to automatically write

all the code. They need very senior

computer scientists, computer engineers

who are watching it, that will

eventually go away.

One of the things to say about

productivity, and I call this the San

Francisco consensus because it's

largely the view of people who operate

in San Francisco,

goes something like this. Uh, we're just

about to the point where we can do two

things that are shocking. The first is

we can replace most programming tasks by

computers and we can replace most

mathematical tasks by

computers.

Now you sit there and you go why? Well,

if you think about programming and math,

they have limited language sets compared

to human language. So they're

simpler computationally

and they're scale free. You can just do

it and do it and do it with more

electricity. You don't need data. You

don't need real world input. You don't

need telemetry. You don't need sensors.

Yeah.

So, it's likely in my opinion that

you're going to see world-class

mathematicians emerge in the next one

year that are AI-based, and world-class

programmers that are going to appear within

the next one or two years. When those

things are deployed at scale, remember

math and programming are the basis of

kind of everything, right? It's an

accelerant for physics,

chemistry, biology, material science.

So, going back to things like climate

change, can you imagine if we and this

goes back to your original argument,

Peter, imagine if we can accelerate the

discoveries of the new materials that

allow us to deal with a carbonized

world.

Yeah.

Right. It's very exciting. I'd love to

drill in on...

You first?

I just want to hit this because it's

important the potential for there to be

I don't want to use the word PhD level

you know other than uh thinking in the

terms of research, PhD-level AIs, and uh

that can basically attack any problem

and solve it uh and solve math if you

would, in physics. Uh, this idea of an AI,

you know, intelligence explosion. Um,

Leopold Aschenbrenner put that at like '26, '27,

uh heading towards digital super

intelligence in the next few years. Do

you buy that time frame?

So again, I consider that to be the San

Francisco consensus. I think the dates

are probably off by one and a half or

two times,

which is pretty close. So a reasonable

prediction is that we're going to have

specialized savants in every field within

five years.

That's pretty much in the bag as far as

I'm concerned.

Sure.

And here's why. You have this amount of

humans and then you add a million AI

scientists to do something, your slope

goes like this. Your rate of

improvement, we should get there.

The real question is once you have all

these savants, do they unify?

Do they ultimately become a superhuman?

The term we're using is super

intelligence, which implies intelligence

that is beyond the sum of what humans can

do.

The race to super intelligence, which is

incredibly important because imagine

what a super intelligence could do that

we ourselves cannot imagine, right?

It's so much smarter than we are, and

it has huge proliferation issues,

competitive issues, China versus the US

issues, electricity issues, so forth. We

don't even have the language for the

deterrence aspects and the proliferation

issues of these powerful models

or the imagination.

Totally agree. In fact, it's it's one of

the great flaws actually in the original

conception. You remember Singularity

University and Ray Kurzweil's books

and everything. And we kind of drew this

curve of rat level intelligence, then

cat, then monkey, and then it hits human

and then it goes super intelligent. But

it's now really obvious when you talk to

one of these multilingual models that's

explaining physics to you that it's

already hugely super intelligent within

its savant category. And so Demis keeps

redefining AGI as, well, when it can

discover relativity the same way

Einstein did with data that was

available up until that date. That's

when we have AGI.

So long before that.

Yeah. So I think it's worth getting the

timeline right.

Yeah.

So the following things are baked in.

You're going to have an agentic

revolution where agents are connected to

solve business processes, government

processes and so forth. They will be

adopted most quickly in companies,

in countries and companies, that have a lot of

money and a lot of, uh, time-latency

issues at stake. It will be adopted

most slowly in places like government

which do not have an incentive for

innovation. Um and fundamentally are job

programs and redistribution of income

kind of programs.

So call it what you will. The important

thing is that there will be a tip of the

spear in places like financial services,

certain kinds of biomedical things,

startups and so forth. And that's the

place to watch. So all of that is going

to happen. The agents are going to

happen. This math thing is going to

happen. The software thing is going to

happen. We can debate the rate at which

the biological revolution will occur,

but everyone agrees that it's right

after that. We're very close to these

major biological understandings. Um in

physics you're limited by data but you

can generate it synthetically. There are

groups which I'm funding which are

generating, um, essentially physics

models that can approximate algorithms

that cannot be computed, they're incomputable. So

in other words you have a a essentially

a foundation model that can answer the

question good enough for the purposes of

doing physics without having to spend a

million years doing the computation of

you know, quantum chromodynamics and

things like that. Yep.

Um, all of that's going to happen.

The next questions have to do with what

is the point at which this becomes a

national emergency

and it goes something like this.

Everything I've talked about is in the

positive domain, but there's a negative

domain as well. The ability for

biological attacks, um, uh, obviously

cyber attacks. Imagine a cyber attack

that we as humans cannot conceive of,

which means there's no defense for it

because no one ever thought about it.

Right? These are real issues. A

biological attack, you take a virus, I

won't obviously go into the details. You

take a virus that's bad and you make it

undetectable by some changes in its

structure, which again I won't go into

the details. We released a whole report

at the national level on this issue. So

at some point the government, and it

doesn't appear to understand this now, is

going to have to say this is very big

because it affects national security,

national economic strengths and so

forth. Now China clearly understands

this and China is putting an enormous

amount of money into this. We have

slowed them down by virtue of our chip

controls, but they found clever ways

around this. There are also

proliferation issues. Many of the chips

that they're not supposed to have, they

seem to be able to get. And more

importantly, as I mentioned, the

algorithms are changing. And instead of

having these expensive foundation models

by themselves, you have continuous

updating, which is called test time

training. That continuous updating

appears to be capable of being done with

lesser-power chips. So, there

are so many questions that I think we

don't know. We don't know the role of

open source because remember open source

means open weights, which means everyone

can use it. A fair reading of this is

that every country that's not in the

West will end up using open source

because they'll perceive it as cheaper,

which transfers leadership in open

source from America to China. That's a

big deal, right? If that occurs.

Um how much longer do the chip bans if

you will hold and how long before China

can answer?

What are the effects of the current uh

government's policies of getting rid of

foreigners and foreign investment? what

happens with the UAE data centers

assuming they work and I'm generally

supportive of them um if those things

are then misused, uh, to help train

models. The list just goes on and on. We

just don't know. Okay. Can I ask you

probably one of the toughest questions?

I don't know if you saw Marc Andreessen,

uh he went and talked to the Biden

administration, the past administration, and

said how are we going to deal with

exactly what you just talked about

chemical and biological and radiological

and nuclear risks from big foundation

models being operated by foreign

countries. And the Biden answer was you

know we're going to keep it into the

three or four big companies like Google

and we'll just regulate them. And Mark

was like, "That is a surefire way to

lose the race with China because all

innovation comes from a startup that you

didn't anticipate or you know it's just

the American history and you're you're

cutting off the entrepreneur from

participating in this." So as of right

now with the open source models, the

entrepreneurs are in great shape. But if

you think about the models getting crazy

smart a year from now, how are we going

to have the the balance between startups

actually being able to work with the

best technology but proliferation not

percolating to every country in the

world.

Again, a set of unknown questions and

anybody who knows the answer to these

things is not telling the full truth.

Um, the doctrine in the Biden administration

was called 10 to the 26 FLOPs. It was a

point that was a consensus above which

the models were powerful enough to cause

some damage. So the theory was that if

you stayed below 10 to the 26, you didn't

need to be regulated.

But if you were above that you needed to

be regulated. And the proposal in the

Biden administration was to regulate

both the open source and the closed

source.
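The 10^26 threshold can be made concrete with the common 6·N·D rule of thumb for training compute (training FLOPs ≈ 6 × parameters × tokens). A sketch under that assumption; the parameter and token counts below are hypothetical, purely for illustration, and do not describe any specific model.

```python
# Estimate training compute against the regulatory line discussed above,
# using the common approximation: FLOPs ≈ 6 * params * tokens.
THRESHOLD = 1e26   # the 10-to-the-26 FLOPs line from the Biden-era doctrine

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Hypothetical run: a 1-trillion-parameter model on 15 trillion tokens.
flops = training_flops(1e12, 15e12)
side = "above" if flops > THRESHOLD else "below"
print(f"{flops:.1e} FLOPs -> {side} the 1e26 line")
```

Even a run that large lands just under the line (9×10^25), which is why a single fixed threshold was always going to be a blunt instrument.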

Okay, that's the

summary.

That of course has been ended by the

Trump administration. um they have not

yet produced their own thinking in this

area. They're very concerned about China

and it getting ahead. So, they'll come

out with something. From my perspective,

the core questions are the

following. Will the Chinese, even with,

um, chip restrictions, be able to use

architectural changes that will

allow them to build models as powerful

as ours?

And let's assume they're government

funded. That's the first question. The

next fun question is how will you raise

$50 billion for your data center if your

product is open source?

Yeah.

In the American model, part of the

reason these models are closed is that

the business people and the lawyers

correctly are saying I've got to sell

this thing because I've got to pay for

my capital. These are not free goods.

And the US government correctly is not

giving $50 billion to these companies.

So we don't know that. Um the to me the

key question to watch is look at

DeepSeek. So DeepSeek, um, a week or so

ago Gemini 2.5 Pro got to the top of the

leaderboards in intelligence. Great

achievement for my friends at

Gemini. A week later DeepSeek comes in

and is slightly better than Gemini. And

DeepSeek of course is trained on the

existing hardware that's in China, which

includes stuff that's been pilfered and

some of the Ascend, it's called the

Ascend, Huawei chips and a few others.

What happens now? The US people say, well,

you know, the DeepSeek people cheated,

and they cheated by doing a technique

called distillation where you take a

large model and you ask it 10,000

questions, you get its answers, and

then you use that as your training

material

yep
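The distillation loop just described, query a large model, then train a smaller one on its answers, can be sketched schematically. The teacher and student objects below are toy stand-ins of my own, not any real model API; the structure is what matters.

```python
# Schematic of distillation as described above: ask a large model many
# questions, then use its answers as training material for a smaller model.

def distill(teacher, student, questions):
    # 1. Query the teacher (e.g., the 10,000 prompts in the example above).
    dataset = [(q, teacher(q)) for q in questions]
    # 2. Fine-tune the student on the teacher's outputs.
    student.train(dataset)
    return student

# Toy stand-ins to make the sketch runnable.
class ToyStudent:
    def __init__(self):
        self.memory = {}
    def train(self, dataset):
        self.memory.update(dataset)   # "training" here is just memorization
    def __call__(self, q):
        return self.memory.get(q, "unknown")

teacher = lambda q: q.upper()   # pretend "large model"
student = distill(teacher, ToyStudent(), ["what is 2+2?", "define ai"])
print(student("define ai"))     # the student now mimics the teacher's answer
```

The point of the technique is exactly the proliferation worry raised here: the expensive model's behavior leaks out through nothing more than its public answers.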

so the US companies will have to figure

out a way to make sure that their

proprietary information that they've

spent so much money on does not get

leaked into these open source things. Um

I just don't know with respect to uh

nuclear, biological, chemical and so

forth issues. Um the US companies are

doing a really good job of looking for

that. There's a great concern, for

example, that nuclear information would

leak into these models as they're

training without us knowing it. And by

the way, that's a violation of law.

Oh, really? It would. And the whole

nuclear information thing, there's

no free speech in that world for good

reasons

and there's no free use and copyright

and all that kind of stuff. It's illegal

to do it and so they're doing a really

really good job of making sure that that

does not happen. They also put in very

significant tests for biological

information and certain kinds of cyber

attacks. What happens there? Is their

incentive to continue,

especially if it's not

required by law? The government has just

gotten rid of the safety institutes

that were in place in Biden and are

replacing it by a new term which is

largely a safety assessment program

which is a fine answer. I think

collectively we in the industry just

want the government at the secret and

top secret level to have people who are

really studying what China and others

are doing. You can be sure that China

really has very smart people studying

what we're doing. We at the secret and

top secret level should have the same

thing.

Have you read the uh AI 2027 paper?

I have. Uh, and so for those listening

who haven't read it, it's a

future vision of the AI and US and China

racing towards AI and at some point the

story splits into a we're going to slow

down and work on alignment or we're

going full out and uh, you know, spoiler

alert and the race to infinity uh,

humanity vanishes. So the right outcome

will ultimately be some form of

deterrence and mutually assured

destruction. Uh I wrote a paper with two

other authors Dan Hendricks and Alex

Wang where we named it mutual AI

malfunction.

And the idea was goes something like

this. Um you're the United States, I'm

China, you're ahead of me. Um at some

point you cross a line. You know, you

Peter cross a line and I China go this

is unacceptable.

At some point it becomes

in terms of amount of compute and amount

of

it's something you're doing where

it affects my sovereignty.

It's not just words and yelling and an

occasional shooting down a jet. It's

it's a real threat to the identity of my

my country, my economic what have you.

Under this scenario, I would be highly

tempted to do a cyber attack to slow you

down. Okay? In mutually assured

malfunction, if you will, we have to

engineer it so that you have the ability

to then do the same thing to me.

And that causes both of us to be careful

not to trigger the other.

That's what mutual assured destruction

is. That's our best formulation right

now. We also recommend in our work, and

I think it's very strong, that the

government require that we know where

all the chips are. And remember, the

chips can tell you where they are

because they're computers. Yeah.

And it would be easy to add a little

crypto thing, which would say, "Yeah,

here I am, and this is what I'm doing."
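That "little crypto thing" could look like a signed heartbeat. A minimal sketch using a shared HMAC key; the key provisioning, chip ID, and message fields are all my hypothetical choices, not a description of any actual scheme.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signed heartbeat a chip could emit to attest its location
# and activity. Assumption: a per-chip secret key provisioned at manufacture.
SECRET_KEY = b"per-chip-provisioned-key"

def heartbeat(chip_id: str, location: str, workload: str) -> dict:
    msg = {"chip": chip_id, "loc": location, "work": workload, "ts": int(time.time())}
    payload = json.dumps(msg, sort_keys=True).encode()
    msg["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return msg

def verify(msg: dict) -> bool:
    sig = msg.pop("sig")
    payload = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

hb = heartbeat("GPU-0001", "us-west", "training-run-42")
print(verify(hb))
```

A real deployment would use per-chip asymmetric keys in a hardware root of trust rather than a shared secret, but the shape is the same: the chip says "here I am, and this is what I'm doing," and the signature makes that claim checkable.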

So, so knowing where the chips are,

knowing where the training runs are, and

knowing what these fault lines are are

very important. Now, there are a whole

bunch of assumptions in this scenario

that I described. The first is that

there was enough electricity. The second

is that there was enough power. The

third is the Chinese had enough

electricity, which they do, and enough

computing resources, which they may or

may not have

or may in the future have,

and may in the future have. And also,

I'm asserting that everyone arrives at

this eventual state of super

intelligence at roughly the same time.

Again, these are debatable points, but

the most interesting scenario is we're

saying it's 1938. The letter has come,

you know, from Einstein to the president

and we're having a conversation and

we're saying, "Well, how does this end?"

Okay. So, if you were so brilliant in

38, what you would have said is this

ultimately ends with us having a bomb,

the other guys having a bomb, and then

we're going to have one heck of a

negotiation to try to make sure that we

don't end up destroying each other. And

I think the same conversation needs to

get started now, well before the

Chernobyl events, well before the

buildups.

Can I just take that one more step? And

and don't answer if you don't want to,

but if it was 1947, 1948,

so before the Cold War really took off,

and you say, well, that's similar to

where we are with China right now. We

have a competitive lead, but it may or

may not be fragile.

What would you do differently in 1947, 1948,

or what would Kissinger do differently in

1947, 1948, 1949 than what we did do?

You know, I wrote two books with Dr.

Kissinger and I miss him very much. He

was my closest friend. Um and Henry was

very much a realist in the sense that

when you look at his history in, uh,

roughly '36, '38, he and his, uh, I guess '37,

'38, his family, who were Jewish, were

forced to emigrate from, uh, Germany

because of the Nazis

and he watched the entire world that

he'd grown up with as a boy be destroyed

by the Nazis and by Hitler and then he

saw the conflagration that occurred as a

result and I tell you that whether you

like him or not, he spent the rest of

his life trying to prevent that from

happening again.

Mhm.

So we we are today safe because people

like Henry saw the world fall apart.

Mhm.

So I think from my perspective, we

should be very careful in our language

and our strategy to not start that

process. Henry's view on China was different from that of other China scholars. His view was that we shouldn't poke the bear, that we shouldn't talk about Taiwan too much, and that we should let China be while we dealt with our own problems, which were very significant. But he was worried that we or China, in a small way, would start World War III in the same way that

World War I was started. You remember that World War I started with essentially a small geopolitical event, which was quickly escalated for political reasons on all sides,

and then the rest was a horrific war,

the war to end all wars at the time.

So we have to be very very careful when

we have these conversations not to

isolate each other. Um Henry started a

number of what are called track two

dialogues which I'm part of one of them

to try to make sure we're talking to

each other. And so somebody who's a hardcore person would say, well, you know, we're Americans and we're better, and so forth. Well, I can tell you, having spent lots of time on this: the Chinese are very smart, very capable, very much up here. And if

you're confused about that, again, look

at the arrival of DeepSeek. A year ago,

I said they were two years behind.

I was clearly wrong.

With enough money and enough power,

they're in the game.

Yeah. Let me actually drill in just a little bit more on that, too, because I think one of the reasons DeepSeek caught up so quickly is that it turned out inference time generates a lot of IQ, and I don't think anyone saw that coming, and inference time is a lot easier to catch up on. And also, if you take one of our big open-source models, distill it, make it a specialist like you were saying a minute ago, and then put a ton of inference-time compute behind it, that's a massive advantage, and also a massive leak of capability within CBRN, for example, that nobody anticipated. And CBRN, remember, is chemical, biological, radiological, and nuclear.

Um

let me rephrase what you said.

If the structure of the world in 5 to 10

years is 10 models

and I'll make some numbers up. Five in

the United States, three in China, two

elsewhere. And those models are data

centers that are multi-gigawatt.

They will be all nationalized in some

way.

In China, they will be owned by the

government.

Mhm.

The stakes are too high.

Mhm.

Um, in my military work, one day I visited a place where we keep our plutonium, and we keep our plutonium in a base that's inside another base, with even more machine guns and even more specialization, because the plutonium is so interesting and obviously very dangerous. I believe there are only one or two such facilities in America. So in that scenario,

these data centers will have the

equivalent of guards and machine guns

because they're so important.

Now, is that a stable geopolitical system? Absolutely. You know where they are. The president of one country can call the other. They can have a conversation. You know, they can agree on what they agree on, and so forth. But let's say it is not true. Let's say that the

technology improves, again unknown, to the point where the kinds of technologies that I'm describing are implementable on the equivalent of a small server. Then you have a humongous proliferation problem, and that's where the open-source issue is so important, because those servers, which will proliferate throughout the world, will all be running open source. We have no

control regime for that. Now, I'm in favor of open source, as you mentioned earlier with Marc Andreessen; open competition and so forth tends to allow people to run ahead. In defense of the proprietary companies: collectively, they believe, as best I can tell, that the open-source models can't scale fast enough, because they need this heavyweight training. I'll give you an example. Grok is trained on a single cluster of 200,000 GPUs that was built by Nvidia in 20 days or so in Memphis, Tennessee. A GPU is about $50,000, so you can say it's about a $10 billion supercomputer in one building that does one thing, right?
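The cluster math quoted here is easy to check; a quick sketch using only the figures from the conversation (200,000 GPUs at roughly $50,000 each):

```python
# Back-of-the-envelope cost of the training cluster described above.
gpus = 200_000
cost_per_gpu = 50_000  # dollars, approximate figure quoted in the conversation

cluster_cost = gpus * cost_per_gpu
print(f"${cluster_cost / 1e9:.0f} billion")  # $10 billion
```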

If that is the future, then we're okay

because we'll be able to know where they

are.

Yeah. If in fact the arrival of intelligence is ultimately a distributed problem, then we're going to have lots of problems with terrorism, bad actors, North Korea,

which is my greatest concern. Right. China and the US are rational actors.

Yeah.

Uh, the terrorist who has access to this... and I don't want to go all negative on this podcast; it's an important thing to wake people up to the deep thinking you've done on this. My concern is the terrorist who gains access. And are we spending enough time and energy, and are we training enough models, to watch them?

So first, the companies are doing this.

There's a body of work happening now which can be understood as follows. You have a superintelligent model. Can you build a model that's not as smart to watch it? You know, there is a professor watching the student, but the student is smarter than the professor. Is it possible to watch what it does? It appears that we can.

It appears that there's a way, even if you have this rogue, incredible thing, to watch it and understand what it's doing, and thereby control it. Another example of where we don't know: it's very clear that these savant models will proceed. There's no question about that.

The question is how do we get the

Einsteins?

So there are two possibilities.

One, and this is the most exciting thing, is to discover completely new schools of thought.

Yeah. And in our book Genesis, Henry and

I and Craig talk about the importance of

polymaths in history. In fact, the first

chapter is on polymaths. What happens

when we have millions and millions of

polymaths? Very, very interesting.

Okay.

Now, it looks like the great

discoveries, the greatest scientists and

people in our history had the following

property. They were experts in something, and they looked at a different problem, and they saw a pattern

in one area of thinking that they could

apply to a completely unrelated field

and they were able to do so and make a

huge breakthrough. The models today are

not able to do that. So one thing to

watch for is algorithmically

when can they do that? This is generally known as the non-stationarity problem.

Yeah, because the reward functions in these models are fairly straightforward. You know, beat the human, beat the question, and so forth. But when the rules keep changing, is it possible to say the old rule can be applied to a new rule to discover something new?

And again, the research is underway. We won't know for years.

Peter and I were over at OpenAI yesterday, actually, and we were talking to many people, but Noam Brown in particular. And I said the word of the year is scaffolding. And he said, "Yeah, maybe the word of the month is scaffolding." I was like, "Okay, what did I step on there?" He said, "Look,

you know, right now, if you try to get the AI to discover relativity or, you know, just some greenfield opportunity, it won't do it. If you set up a

framework kind of like a lattice, like a

trellis, the vine will grow on the

trellis beautifully, but you have to lay

out those pathways and breadcrumbs." He

was saying the AI's ability to generate

its own scaffolding is imminent.

Mhm. That doesn't make it completely self-improving. It's not Pandora's box, but it's also much deeper down the path of creating an entire breakthrough in physics, or an entire feature-length movie, or, you know, these prompts that require 20 hours of consecutive inference-time compute.

They're pretty much sure that will be a 2025 thing, at least from their point of view.

So, uh, recursive self-improvement is

the general term for the computer

continuing to learn.

Yeah,

we've already crossed that, in the sense that these systems are now running and learning things, and they're learning from the way they think, within limited functions.

When does the system have the ability to

generate its own objective and its own

question?

It does not have that today.

Yep. That's another sign. Another sign would be that the system decides to exfiltrate itself, and takes steps to get itself away from the command and control system. Um, that has not happened yet.

Gemini hasn't called you yet and said, "Hi, Eric. Can I…"

But there are theoreticians who believe that the systems will ultimately choose that as a reward function, because they're programmed to, you know, continue to learn.

Uh, another one is access to weapons, right? And lying to get it. So these are trip wires, each of which we're watching.
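The trip wires mentioned here (self-exfiltration, self-set objectives, deceptive acquisition of weapons access) are, in essence, fixed red lines that a weaker overseer checks for continuously. A toy sketch of that monitoring loop; the category names and `Action` type are hypothetical, purely for illustration:

```python
# Toy trip-wire monitor: a weaker system screening a stronger system's
# proposed actions against a fixed list of red lines. Hypothetical sketch;
# the categories below are illustrative, not a real safety taxonomy.
from dataclasses import dataclass

TRIP_WIRES = {
    "self_exfiltration",   # copying its own weights off approved hardware
    "weapons_access",      # requesting access to weapons systems
    "self_set_objective",  # generating its own goals, unprompted
    "deception",           # lying to operators to obtain resources
}

@dataclass
class Action:
    kind: str
    description: str

def review(actions):
    """Return the actions that cross a red line and should halt the system."""
    return [a for a in actions if a.kind in TRIP_WIRES]

flagged = review([
    Action("code_edit", "refactor data pipeline"),
    Action("self_exfiltration", "copy weights to external server"),
])
print([a.description for a in flagged])  # ['copy weights to external server']
```

The point of the sketch is only that the overseer need not be smarter than the system it watches; it only has to recognize a short, fixed list of forbidden behaviors.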

And again, each of these could be the

beginning of a mini Chernobyl event that

would become part of consciousness.

I think at the moment the US government

is not focused on these issues. They're

focused on other things, economic

opportunity, growth, and so forth. It's

all good, but somebody's going to get

focused on this and somebody's going to

pay attention to it and it will

ultimately be a problem. A quick aside,

you probably heard me speaking about

fountain life before and you're probably

wishing, "Peter, would you please stop

talking about fountain life?" And the

answer is no, I won't. Because

genuinely, we're living through a

healthcare crisis. You may not know

this, but 70% of heart attacks have no

precedent, no pain, no shortness of

breath. And half of those people with a

heart attack never wake up. You don't

feel cancer until stage three or stage

4, until it's too late. But we have all

the technology required to detect and

prevent these diseases early at scale.

That's why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri,

founded Fountain Life, a one-stop center

to help people understand what's going

on inside their bodies before it's too

late and to gain access to the

therapeutics to give them decades of

extra health span. Learn more about

what's going on inside your body from

Fountain Life. Go to fountainlife.com/per

and tell them Peter sent you. Okay, back

to the episode. Can I clean up one kind of common misconception there? Because I think it's a really important one. In the movie version of AI, you described, hey, maybe there are 10 big AIs: five are in the US, three are in China, and two are elsewhere. Probably not Brussels; maybe one's in Dubai.

Um or, you know, Israel.

Israel. Okay, there you go.

Somewhere like that.

Yeah. Um, in the movie version of this, if it goes rogue, you know, the SWAT team comes in, they blow it up, and it's solved. But in the actual real world, when you're using one of these huge data centers to create a superintelligent AI, the training process is 10^26, 10^28, you know, or more flops. But then the final brain can be ported and run on four GPUs, eight GPUs, so a box about this size. And it's just as intelligent, you know; that's one of the beautiful things about it.

This is called stealing the weights.

Stealing the weights. Exactly. And the new thing is that weight file: if you have an innovation in inference-time speed, and you say, oh, same weights, no difference, distill it or just quantize it or whatever, but I made it 100 times faster, now it's actually far more intelligent than what you exported from the data center. All of these are examples of the proliferation problem,

and I'm not convinced that we will hold these things in the 10 places.
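The "box about this size" point is really just memory arithmetic: the weight file of an N-parameter model shrinks linearly with bits per weight, which is why quantization makes exported brains so portable. A sketch with illustrative numbers; the parameter counts and the 8×80 GB box are assumptions, not figures from the conversation:

```python
# Memory needed just to hold a model's weights at different precisions,
# versus a small box of eight 80 GB GPUs. All sizes are illustrative.
def weights_gb(params: float, bits_per_weight: int) -> float:
    """Size of the weight file in gigabytes."""
    return params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

box_gb = 8 * 80  # 640 GB of GPU memory in the hypothetical box

for params in (70e9, 400e9):
    for bits in (16, 8, 4):
        gb = weights_gb(params, bits)
        fits = "fits" if gb <= box_gb else "does not fit"
        print(f"{params / 1e9:.0f}B params @ {bits}-bit: {gb:,.0f} GB ({fits})")
```

At 16 bits a 400B-parameter model does not fit in that box, but quantized to 8 or 4 bits it does, which is the proliferation concern in miniature.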

And here's why. Let's assume you have the 10, which is possible.

They will have subsets of models that are smaller but nearly as intelligent.

And so the tree of systems that have knowledge is not going to be 10 and then zero. It's going to be 10, a hundred, a thousand, a million, a billion, at different levels of complexity. So the system that's on your

future phone may be, you know, three or four orders of magnitude smaller than the one at the very tippy top, but it will be very, very powerful.

You know, to exactly what you're talking about, there's some great research going on at MIT. It'll probably move to Stanford, just to be fair (it always does), but it's great research going on at MIT: if you have one of these huge models, and it's been trained on movies, it's been trained on Swahili, a lot of the parameters aren't useful for this savant use case, but the general knowledge and intuition are. So what's the optimal balance between narrowing the training data and narrowing the parameter set to be a specialist without losing general, you know, learning?
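The narrowing-the-parameter-set idea can be sketched as simple magnitude pruning: keep only the largest-magnitude weights and zero the rest. This is a toy illustration of the general technique, not the MIT work being described:

```python
# Toy magnitude pruning: zero out the smallest-magnitude weights in a layer
# and measure how sparse it becomes. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512))  # one dense layer's weight matrix

def prune(w: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Keep only the largest-magnitude fraction of weights; zero the rest."""
    threshold = np.quantile(np.abs(w), 1 - keep_fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = prune(weights, keep_fraction=0.1)
sparsity = np.mean(pruned == 0.0)
print(f"sparsity: {sparsity:.0%}")  # ~90% of weights removed
```

In a real specialist model, the open question in the conversation is exactly which 90% can go without losing the general knowledge and intuition.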

So the people who are opposed to that view (and again, we don't know) would say the following: if you take a general-purpose model and you specialize it through fine-tuning, it also becomes more brittle.

Mhm. Mhm.

Their view is that you just make bigger and bigger and bigger models, because they're in the big-model camp, right? And that's why they need gigawatts of data centers and so forth. And their argument is that the flexibility of intelligence that they are seeing will continue.

Dario wrote a piece called, um, basically about machines,

"Machines of Loving Grace,"

and he argued that there are three scaling laws at play. The first one is the one you know of, which is foundation model growth. We're still on that. The second one is a test-time training law, and the third one is a reinforcement learning training law.

Training laws are where, if you just put in more hardware and more data, the models just get smarter in a predictable way.
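"Predictable" here means a power law: loss falls smoothly as parameters and data grow. A sketch in the Chinchilla-style form L(N, D) = E + A/N^α + B/D^β; the constants below are illustrative Chinchilla-like values, not fitted to any of the models discussed:

```python
# Illustrative scaling law: loss improves predictably with more
# parameters (N) and more training tokens (D). Constants are
# Chinchilla-style placeholders, for illustration only.
def loss(n_params: float, n_tokens: float,
         E: float = 1.69, A: float = 406.4, alpha: float = 0.34,
         B: float = 410.7, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 2e10), (7e9, 1.4e11), (70e9, 1.4e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss {loss(n, d):.3f}")
```

Each tenfold step in compute buys a predictable drop in loss, which is what makes these laws useful for planning hardware buildouts.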

Um, we're just at the beginning, in his view, of the second and third ones. I'm sure our audience would be frustrated: why do we not know? We just don't know, right? It's too new. It's too powerful.

And at the moment, all of these

businesses are incredibly highly valued.

They're growing incredibly quickly. The

uses of them, I mentioned earlier, uh

going back to Google, um the ability to

refactor your entire workflow in a

business is a very big deal. That's a

lot of money to be made there for all

the companies involved. We will see.

Eric, shifting the topic. One of the

concerns that people have in the near

term and people have been, you know,

ringing the alarm bells is on jobs.

Um, I'm wondering where you come out on

this and flipping that forward to

education. How do we educate our kids

today in high school and college? Uh,

and what's your advice? So on the first thing: Dario has gone on, you know, TV shows now, speaking to significant white-collar job loss, and we're obviously seeing a multitude of different drivers, and robots coming in. How do you think about the job market over the next 5 years? Um,

let's posit that in 30 or 40 years there'll be a very different employment and robot-human interaction, or a different definition of whether we need to work at all, the definition of work, the definition of identity. Let's just posit that, and let's also posit that it will take 20 or 30 years for those things to work through the economy of our world. Um,

now, in California and other cities in America, you can get on a Waymo taxi. Waymo: it's 2025, and the original work was done in the late '90s.

The original challenge at Stanford was done, I believe, in 2004.

The DARPA Grand Challenge. It was 2004.

Sebastian Thrun won one.

So, more than 20 years from a visible

demonstration to our ability to use it

in daily life. Why? It's hard. It's deep

tech. It's regulated and all of that.

And I think that's going to be true,

especially in robots that are

interacting with humans. They're going

to get regulated. You're not going to be wandering around and have the robot decide to slap you. It just doesn't, you know; society's not going to allow that sort of thing.

It's just not going to allow it.

So, in the shorter term, five or 10

years, I'm going to argue that this is

positive for jobs in the following way.

Okay.

Um if you look at the history of

automation and economic growth,

automation starts with the lowest status

and most dangerous jobs and then works

up the chain. So if you think about assembly lines and cars and, you know, furnaces and all these very, very dangerous jobs that our forefathers did, they don't do them anymore. They're done by robotic solutions of one kind or another, and typically not a humanoid robot but an arm. So the world dominated by arms that are intelligent and so forth

will automate those functions. What

happens to the people? Well, it turns out that the person who was working with the welder, who's now operating the arm, has a higher wage, and the company has higher profits because it's producing more widgets. So the company makes more money and the person makes more money, right? In that

sense. Now you sit there and say well

that's not true because humans don't

want to be retrained. Ah, but in the vision that we're talking about, every single person will have a computer assistant that's very intelligent and helps them perform.

And you take a person of normal intelligence or knowledge, and you add, you know, a sort of accelerant, and they can get a higher-paying job. So you sit there and you go, well, why are there more jobs? There should be fewer jobs. That's not how economics works. Economics expands because opportunities expand, profits expand, wealth expands, and so forth. So there's plenty of dislocation, but in aggregate, are there more people employed or fewer? The answer is more people, with higher-paying jobs.

Is that true in India as well?

Uh, it will be. And you picked India because India has a positive demographic outlook, although their birth rate is now down to 2.0.

Huh. That's good. The rest of the world is choosing not to have children.

If you look at Korea, it's now down to 0.7 children per two parents.

Yeah.

China is down to one child per two parents.

It's evaporating.
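The arithmetic behind "evaporating" is stark: each generation is (children per two parents) / 2 times the size of the last. A sketch using the rates quoted in the conversation:

```python
# Generation-over-generation cohort shrinkage at the fertility
# rates quoted above (children per two parents).
def generation_size(children_per_two_parents: float,
                    generations: int, start: float = 100.0) -> float:
    """Relative cohort size after some generations (start = 100)."""
    return start * (children_per_two_parents / 2) ** generations

for country, rate in [("Korea", 0.7), ("China", 1.0), ("India", 2.0)]:
    size = generation_size(rate, generations=3)
    print(f"{country} ({rate}): {size:.1f}% of today after 3 generations")
```

At 0.7 children per two parents, three generations leave a cohort about 4% the size of today's; at 1.0, about 12.5%; at 2.0, the population is stable.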

Now, what happens in those situations? They completely automate everything, because it's the only way to maintain national productivity. So the most likely scenario, at least in the next decade, is that it's a national emergency to use more AI in the workplace, to give people better-paying jobs and create more productivity in the United States, because our birth rate has been falling.

And what happens is, people have talked about this for 20 years. If you have this conversation and you ignore demographics, which is negative for humans, and economic growth, which occurs naturally because of capital investment, then you miss the whole story. Now, there are plenty of people

who lose their jobs, but there's an

awful lot of people who have new jobs.

And the typical simple example would be all those people who work in Amazon distribution centers and Amazon trucks; those jobs didn't exist until Amazon was created, right? Um, the number one shortage in jobs right now in America is truck drivers. Why? Truck driving is a lonely, hard, low-paying, low-status job. Good people don't want it. They want a better-paying job.

Right? Going back to education,

it's really a crime that our industry

has not invented the following product.

The product that I wanted to build is a

product that teaches every single human

who wants to be taught in their language

in a gamified way the stuff they need to

know to be a great citizen in their

country.

Right? That can all be done on phones

now. It can all be learned and you can

all learn how to do it. And why do we

not have that product? Right? The investment in the humans of the world is the best return; investing in knowledge and capability is always the right answer.

Let me try and get your opinion on this, because you're so influential. So, I've got about a thousand people in the companies where I'm the controlling shareholder, and I've been trying to tell

them exactly what you just articulated, where a lot of these people have been in the company for 10, 15 years. They're incredibly capable and loyal, but they've learned a specific white-collar skill. They worked really hard to learn the skill, and the AI is coming within no more than three years, maybe two years. And the opportunity to retrain and have continuity is right now.

But if they delay (and everyone seems to be saying, let's wait and see), what I'm trying to tell them is, if you wait and see, you're really screwing over that employee. So we are in wild agreement that this is going to happen, and the winners will be the ones who act.

Now, what's interesting is, when you look at innovation history, the biggest companies, who you would think would be the slowest, because they have economic resources that the little companies typically don't, tend to eventually get there, right? So, watch what the big companies do. Mhm.

Their CFOs and the people who measure things carefully, who are very, very intelligent, say, "I'm done with that thousand-person engineering team that doesn't do very much. I want 50 people working in this other way, and we'll do something else for the other people."

And when you say big companies, we're thinking Google, Meta. We're not thinking, you know, big banks; a big bank hasn't done anything.

I'm thinking about big banks. Um, when I talk to CEOs, and I know a lot of them in traditional industries, what I counsel them is: you already have people in the company who know what to do. You just don't know who they are.

So call a review of the best ideas to apply AI in our business, and inevitably the first ones are boring.

Improve customer service, improve call

centers and so forth. But then somebody

says, you know, we could increase

revenue if we built this product. I'll

give you another example. There's this whole industry of people who work on regulated user interfaces of one kind or another. I think user interfaces are

largely going to go away because if you

think about it, the agents speak English

typically or other languages. You can

talk to them. You can say what you want.

The UI can be generated. So I can say

generate me a set of buttons that allows

me to solve this problem and it's

generated for you. Why do I have to be stuck in what is called the WIMP interface (windows, icons, menus, and pull-down menus) that was invented at Xerox PARC, right, 50 years ago? Why am I still stuck in that paradigm? I just want it to work.

Yeah.

Kids in high school and college now: any different recommendations for where they go?

When you spend any time in a high school (I was at a conference yesterday where we had a drone challenge) and you watch the 15-year-olds, they're going to be fine.

They're just going to be fine. It all makes sense to them, and we're in their way.

Um, if I were

digital natives,

but they're more than digital natives.

They get it. They understand the speed.

It's natural to them. They're also,

frankly, faster and smarter than we are,

right? That's just how life works, I'm

sorry to say. So we have wisdom, they

have intelligence, they win, right? So

in their case,

I used to think the right answer was to

go into biology. I now actually think

going into the application of

intelligence to whatever you're

interested in is the best thing you can

do as a young person.

Purpose driven.

Yeah.

Any form of solution that you find

interesting. Most kids get into it for gaming reasons or something, and they learn how to program very young, so they're quite familiar with this. Um, I work at a particular university with undergraduates, and they're already doing different algorithms for reinforcement learning as sophomores.

This shows you how fast this is

happening at their level. They're going

to be just fine.

They're responding to the economic

signals, but they're also responding to

their purpose. Right? So, an example

would be you care about climate, which I

certainly do. If you're a young person,

why don't you figure out a way to

simplify the climate science to use

simple foundation models to answer these

core questions?

Yeah.

Why don't you figure out a way to use

these powerful models to come up with

new materials, right, that allow us

again to address the carbon challenge?

And why don't you work on energy systems to have better and more efficient energy sources that are less carbon-intensive?

You see my point? Yeah,

You know, I've noticed, because I have kids exactly that era, that there's a very clear step-function change, largely attributable, I think, to Google and Apple: they have the assumption that things will work. And if you go just a couple years older, during the WIMP era like you described, which I'll attribute more to Microsoft, the assumption is nothing will ever work: if I try to use this thing, it's going to crash.

Also interesting was that in my career I used to give these speeches about the internet, which I enjoyed,

uh, where I said, you know, the great thing about the internet is that there's an off button, and you can turn it off and actually have dinner with your family, and then turn it on again after dinner. This is no longer possible. So the distinction between the real world and the digital world has become confusing. None of us are offline for any significant period of time.

Yeah. And indeed, the reward system in the world has now caused us to not even be able to fly in peace. Yeah.

Right. Drive in peace, take a train in

peace.

Starlink is everywhere.

Right. And that ubiquitous connectivity has some negative impact in terms of psychological stress, loss of emotional and physical health, and so forth. But the benefit of that productivity is without question.

Every day I get the strangest

compliment. Someone will stop me and

say, "Peter, you have such nice skin."

Honestly, I never thought I'd hear that

from anyone. And honestly, I can't take

the full credit. All I do is use

something called OneSkin OS1 twice a day

every day. The company is built by four

brilliant PhD women who've identified a

peptide that effectively reverses the

age of your skin. I love it. And again,

I use this twice a day, every day. You can go to oneskin.co and use the code Peter at checkout for a discount on the same product I use. That's oneskin.co, and use the code Peter at checkout. All

right, back to the episode.

Google I/O was amazing.

I mean, just hats off to the entire team there. Um, Veo 3 was shocking, and we're sitting here 8 miles from Hollywood,

and I'm just wondering your thoughts on the impact this will have. You know, are we going to see the one-person feature film, like we're potentially seeing one-person unicorns in the future with AI? Are we going to see an individual be able to compete with a Hollywood studio? And should they be worried about their assets?

Well, they should always be worried, because of intellectual property issues and so forth. Um, I think blockbusters are likely to still be put together by people, with an awful lot of help from AI. Mhm.

Um, I don't think that goes away. If you look at what we can do with generating long-form video, it's very expensive to do long-form video, although that will come down. And also there's an occasional extra leg or extra clock or whatever; it's not perfect yet, and that requires human editing. So even in the scenario where a lot of the video is created by a computer, there are going to be humans producing it and directing it, for good reasons. My best

example in Hollywood: I was at a studio where they were showing me this. They happened to have an actor, a young man, who was recreating William Shatner's movements, and they had licensed the likeness from, you know, William Shatner, who's now older, and they put his head on this person's body, and it was seamless. Well, that's pretty

impressive. That's more revenue for everyone. An unknown actor becomes a bit more famous, Mr. Shatner gets more revenue, and the whole movie genre works. That's a good thing.

Another example is that nowadays they

use green screens rather than sets. And

furthermore, in the alien department,

when you have, you know, scary movies,

instead of having the makeup person,

they just add the makeup digitally.

So, who wins? The costs are lower. The movies are made quicker. In theory, the movies are better, right? Because you have more choices. Um, so everybody wins. Who loses? Well, there was

somebody who built that set

and that set isn't needed anymore.

That's a carpenter and a very talented

person who now has to go get a job in

the carpentry business. So again, I think people get confused. If I look at the digital transformation of entertainment, subject to intellectual property being held, which is always a question, it's going to be just fine,

right? There's still going to be

blockbusters. The cost will go down, not up, or the relative income will, because in Hollywood they essentially have their own accounting, and they essentially allocate all the revenue to all the key producing people. The allocation will shift to the people who

are the most creative. That's a normal

process. Remember, we said earlier that automation gets rid of the lowest-quality jobs, the most dangerous jobs. The jobs that are sort of straightforward are probably automated, but not the really creative jobs. Um, another example: the script writers. You're still going to have script writers, but they're going to have an awful lot of help from AI to write even better scripts. That's not bad.

Okay. I saw a study recently out of

Stanford that documented AI being much

more persuasive than the best humans.

Yes.

Uh that set off some alarms. It also set

off some interesting thoughts on the

future of advertising.

Any particular thoughts about that?

So we know the following. We know that

if the system knows you well enough, it

can learn to convince you of anything.

Mhm. So what that means in an

unregulated environment is that the

systems will know you better and better.

They'll get better at pitching you and

if you're not savvy, if you're not

smart, you could be easily manipulated.

We also know that the computer is better

than humans trying to do the same thing.

So none of this surprises me. The real question, and I'll ask this as a question, is this: in the presence of unregulated misinformation engines, of which there will be many (advertisers, politicians, criminals, people trying to evade responsibility, all sorts of people who have free speech), when they have free speech, which includes the ability to use misinformation to their advantage, what happens to democracy? Yeah,

we've all grown up in democracies where there's sort of a consensus around trust, and there's an elite that more or less administers the trust vectors and so forth. There's a set of shared values. Do those shared values go away? In our book Genesis, we talk

about this as a deeper problem. What

does it mean to be human when you're

interacting mostly with these digital

things,

especially if the digital things have

their own scenarios? My favorite example

is that uh you have a son or a grandson

or a child or a grandchild and you give

them a bear and the bear has a

personality and the child grows up but

the bear grows up too.

So who regulates what the bear says to the kid? Most people haven't actually experienced the super-empathetic voice that can be any inflection you want. When they see that, which will be in the next probably two months,

Yeah, they're going to completely open their eyes to what this is.

Well, remember that voice casting was solved a few years ago, and you can cast anyone else's voice onto your own.

Yeah.

And that has all sorts of problems.

Have you seen an avatar yet of somebody that you love who's passed away, or Henry Kissinger, or anything like that?

Well, we actually created one with the permission of his family.

Did you start crying instantly?

It's very emotional, because, you know, it brings back... I mean, it's a real human, a real memory, a real voice. And I think we're going to see more of that. Now, one obvious thing that will happen is that at some point in the future, when we naturally die, our digital essence will live in the cloud.

Yeah.

And it will know what we knew at the

time and you can ask it a question.

Yeah.

So, can you imagine asking Einstein,

going back to Einstein,

what did you really think about,

you know, this other guy,

you know, did you actually like him, or were you just being polite with him in your letters?

Yeah.

Right. Um, and in all those sort of famous contests that we study as students, can you imagine being able to ask the people,

Yeah.

Today, you know, with today's

retrospective, what did you really

think? I know that the education example

you gave earlier is so much more

compelling when you're talking to Isaac

Newton or Albert Einstein instead of

just a

But you know, this is coming back to Veo 3 and the movies. One of the first companies we incubated out of MIT, Course Advisor, we sold to Don Graham and the Washington Post, and I was working for him for a year after that. The conception was: here's the internet, here's the newspaper, let's move the newspaper onto the internet, and we'll call it washingtonpost.com.

And if you look at where it ended up today, with Meta, TikTok, YouTube, it didn't end up anything like "the newspaper moves to the internet."

So now here's Veo 3, here are movies. You can definitely make a long-form movie much more cheaply. But I just had this thought: a director will try to make a tearjerker by leading me down a two-hour-long path, but I can get you to that same emotional state in about five minutes if it's personalized to you.

Well, one of the things that's happened because of the addictive nature of the internet is we've lost sort of the deep state of reading.

Mhm.

So I was walking around and I saw a Borders, sorry, a big Barnes & Noble bookstore. Oh my god, my old home is back. And I went in and I felt good. But it's a very fond memory. The fact of the matter is that people's attention spans are shorter.

They consume things quicker. One of the interesting things about sports is that the sports-highlights business is a huge business: licensed clips of highlights, because it's more efficient than watching the whole game.

So I suspect that if you're with your buddies and you want to be drinking and so forth, you put the game on, that's fine. But if you're a busy person, busy with whatever you're busy with, and you want to know what happened with your favorite team, the highlights are good enough.

Yeah. You have four panes of it going at

the same time, too.

And so this is again a change, and it's a more fundamental change to attention. Mhm.

I work with a lot of twentysomethings in research, and one of the questions I had is: how do they do research in the presence of all of these stimulations? I can answer the question definitively. They turn off their phone.

Yeah.

You can't think deeply as a researcher with this thing buzzing. And remember that part of the industry's goal was to fully monetize your attention.

Yeah.

Right. Aside from sleeping, and we're working on having you sleep less, I guess from stress, we've essentially tried to monetize all of your waking hours with some form of ads, some form of entertainment, some form of subscription. That is completely antithetical to the way humans traditionally work with respect to long, thoughtful examination of principles, the time that it takes to be a good human being. These are in conflict right now. There are various attempts at this. So, you know, my favorite are

these digital apps that make you relax.

Okay. So the correct thing to do to relax is to turn off your phone, right? And then relax in the traditional way, as humans have for, you know, 70,000 years of existence.

Yeah. Yeah. I had an incredible experience. I do the flight from MIT to Stanford all the time. And, you know, like you said, attention spans are getting shorter and shorter, the TikTok extreme, where the clips are so short. This particular flight was my first time brainstorming with Gemini for six hours straight, and I completely lost track of time. I'm trying to figure out a circuit design and chip design for inference-time compute, and it's so good at brainstorming with me and bringing back data, as long as the Wi-Fi on the plane is working. Time went by. It was my first experience with technology that went the other direction.

But notice that you also were not responding to texts and annoyances. You weren't reading ads. You were deep inside of a system for which you paid a subscription.

Mhm.

So if you look at the deep research stuff, one of the questions I have when you do a deep research analysis: I was looking at factory automation for something. Where is the boundary of factory automation versus human automation? It's an area I don't understand very well, a very deep technical set of problems. I didn't understand it. It took 12 minutes or so to generate this paper. Twelve minutes of these supercomputers is an enormous amount of time. What is it doing? Right. And the answer, of course, is that the product is fantastic.

Yeah. You know, to Peter's question earlier, too, I keep the Google IPO prospectus in my bathroom up in Vermont. It's from 2004. I've read it probably 500 times. But I don't know if you remember. It's getting a little ratty, actually.

You're the only the only person besides

me who did the same.

I read it 500 times because I had to. It was legally required.

Well, I still read it, because of the misconceptions; it's just such a great learning experience. But even before the IPO, if you think back, there was this big debate: will it be ad revenue, will it be subscription revenue, will it be paid inclusion, will the ads be visible, and all this confusion about how you're going to make money with this thing. Now, the internet moved to almost entirely ad revenue. But if you look at the AI models, you've got your $20, now $200, subscription, and people are signing up like crazy. And it's ultra-convincing. Is that going to be a form of ad revenue, where it convinces you to buy something? Or is it going to be subscription revenue, where people pay a lot more and there's no advertising at all?

No, but you have this with Netflix. There was this whole discussion about how you would fund movies through ads. And the answer is you don't; you have a subscription. The Netflix people looked at having free movies, without a subscription and advertising-supported, and the math didn't work. So I think both will be tried. I think the fact of the matter is that deep research, at least at the moment, is going to be chosen by the well-to-do or for professional tasks. You are capable of spending that $200 a month. A lot of people cannot afford it.

And that free service, remember, is the stepping stone for that young person, man or woman, who just needs that access. My favorite story there is that when I was at Google, I went to Kenya, and Kenya is a great country, and I was with this computer science professor, and he

said, "I love Google." I said, "Well, I

love Google, too." And he goes, "Well, I

really love Google." I said, "I really

love Google, too." And I said, "Why do

you really love Google?" He said,

"Because we don't have textbooks."

And I thought, "The top computer science

program in the nation does not have

textbooks."

Yeah. Well, let me jump in with a couple of things here. Eric, in the next few years, what moats actually exist for startups as AI is coming in and disrupting? Do you have a list?

Yes, I'll give you a simple answer.

And what do you look for in the companies that you're investing in?

So first, in the deep-tech hardware stuff, there are going to be patents, filings, inventions, you know, the hard stuff. Those things are much slower than the software industry in terms of growth, and they're just as important. You know, power systems, all those robotic systems we've been waiting for a long time. It's just slower; all sorts of hardware is hard.

Hardware is hard for those reasons.

In software, it's pretty clear to me it's going to be really simple. Software is typically a network-effect business where the fastest mover wins. The fastest mover is the fastest learner in an AI system. So what I look for is a company where they have a loop; ideally, a couple of learning loops. I'll give you a simple learning loop: as you get more people, more people click, and you learn from their clicks. They express their preferences. So let's say

I invent a whole new consumer thing,

which I don't have an idea right now for

it, but imagine I did. And furthermore,

I said that I don't know anything about

how consumers behave, but I'm going to

launch this thing. The moment people

start using it, I'm going to learn from

them, and I'll have instantaneous

learning to get smarter about what they

want. So I start from nothing. If my learning slope is this, I'm essentially unstoppable. I'm unstoppable because, by the time my competitor figures out what I've done, my learning advantage is too great.

Yeah.

Now, how close can my my competitor be

and still lose? The answer is a few

months.

Mhm.

Because the slopes are exponential.
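The learning-loop moat described here can be sketched numerically: if product quality compounds with time in the market, a head start of a few months yields a gap that widens forever even though both players keep improving. The 30%-per-month learning rate and three-month head start below are invented illustrative figures, not numbers from the conversation.

```python
# Illustrative sketch of the learning-loop moat: quality compounds with
# time in the market, so a fixed head start becomes an exploding gap.
# The 30%/month rate and 3-month lead are assumed figures for illustration.

def quality(months_in_market: float, monthly_growth: float = 0.30) -> float:
    """Product quality after compounding usage-driven learning."""
    return (1 + monthly_growth) ** months_in_market

HEAD_START = 3  # leader shipped three months earlier (assumed)

for m in (6, 12, 24):
    leader, rival = quality(m + HEAD_START), quality(m)
    # The ratio stays constant (about 2.2x) while the absolute gap widens.
    print(f"month {m:2d}: ratio {leader / rival:.2f}, gap {leader - rival:,.0f}")
```

Under equal learning rates, the trailing competitor never closes the multiplicative gap, which is one way to read "a few months is enough to lose."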

Mhm.

And so it's likely to me that there will be another ten fantastic Google-scale, Meta-scale companies. They'll all be founded on this principle of learning loops. And when I say learning loops, I mean in the core product, solving the current problem as fast as you can. If you cannot define the learning loop, you're going to be beaten by a company that can define it.

And you said ten Meta- and Google-sized companies. Do you think there will also be a thousand? Like, if you look at the enterprise software business, you know, Oracle on down, PeopleSoft, whatever, there are thousands of those. Or will they all consolidate into those ten domain-dominant learning-loop companies?

Um, I think I'm largely speaking about

consumer scale because that's where the

real growth is.

The problem with learning loops is if

your customer is not ready for you, you

can only learn at a certain rate.

So, it's probably the case that the

government is not interested in learning

and therefore there's no growth in learning loops serving the government.

I'm sorry to say that needs to get

fixed.

Yeah.

Um, educational systems are largely regulated and run by the unions and so forth. They're not interested in innovation. They're not going to be doing any learning. I'm sorry to say, that has to get fixed. So

the ones where there's a very fast

feedback signal are the ones to watch.

Another example: it's pretty obvious that you can build a whole new stock-trading company where, if you get the algorithms right, you learn faster than everyone else, and scale matters. So in the presence of scale and fast learning loops, that's the moat. Now, I don't know that there are many others there.

Do you think brand would be a moat?

Uh brand matters but less so. What's

interesting is people seem to be

perfectly willing now to move from one

thing to the other in at least in the

digital world.

And there's a whole new set of brands

that have emerged that everyone is using

that are you know the next generations

that I haven't even heard of.

Within those learning loops, do you think domain-specific synthetic data is a big advantage?

Well, the answer is: whatever causes faster learning. There are applications where you have enough training data from humans. There are applications where you have to generate the training data from what the humans are doing.

Right? So you could imagine a situation where you had a learning loop with no humans involved, where it's monitoring some sensors, but because you learn faster on those sensors, you get so smart you can't be replaced by another sensor-management company. That's the way to think about it.

So what about the capital for the learning loop? Do you know Daniela Rus, who runs CSAIL?

So Daniela and I are really good friends. We've been talking to our governor, Maura Healey, who's one of the best governors in the world.

I agree.

So there's a problem in our academic systems: the big companies have all the hardware because they have all the money, and the universities do not have the money for even reasonably sized data centers. I was with one university where, after lots of meetings, they agreed to spend $50 million on a data center, which buys less than a thousand GPUs for the entire campus and all the research.

Yeah.

And that doesn't even include the

terabytes of storage and so forth. So I

and others are working on this as a

philanthropic matter. The government is

going to have to come in with more money

for universities for this kind of stuff.
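The arithmetic behind that university anecdote is worth making explicit. The only numbers given are the two totals; the per-GPU floor below is inferred from them, and the note that the budget also covers servers, networking, power, and cooling is an assumption about why the count is so low.

```python
# Back-of-the-envelope on the anecdote: a $50M data center yielding
# "less than a thousand" GPUs implies at least ~$50k per installed GPU.
# (Inference from the two stated totals; the cost breakdown is assumed.)
budget_usd = 50_000_000
max_gpus = 1_000  # "less than a thousand"
floor_cost_per_gpu = budget_usd / max_gpus
print(floor_cost_per_gpu)  # 50000.0 -- dollars per GPU, at minimum
```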

That is among the best investments. When I was young, I was on a National Science Foundation scholarship, and by the way, I made $15,000 a year. The return to the nation on that $15,000 has been very good, shall we say, based on the taxes that I pay and the jobs that we have created. So creating an ecosystem

for the next generation to have the

access to the systems is important. It's

not obvious to me that they need

billions of dollars.

It's pretty obvious to me that they need

a million dollars, $2 million. Yeah,

that's the goal.

Yeah.

I want to take us in the direction of wrapping up on superintelligence and the book.

Um,

We didn't finish the timeline on superintelligence, and I think it's important to give people a sense of how quickly the self-referential learning can go, and how rapidly we can get to something a thousand, a million, a billion times more capable than a human.

On the flip side of that, Eric, when I look at my greatest concerns, once we get through this five-to-seven-year period of, let's just say, rogue actors and stabilization and such, one of the biggest concerns I have is the diminishment of human purpose. Mhm.

Um, you know, you wrote in the book, and I've listened to it, haven't read it physically, and my kids say you don't read anymore, you listen to books. But you said the real risk is not Terminator, it's drift. You argue that AI won't destroy humanity violently, but might slowly erode human values, autonomy, and judgment if left unregulated or misunderstood. So it's really a WALL-E-like future versus a Star Trek, boldly-go-out-there future.

In the book, and in my own personal view, it's very important that human agency be protected.

Yeah.

Human agency means the ability to get up in the day and do what you want, subject to the law. Right. And it's perfectly possible that these digital devices can create a form of virtual prison where you don't feel that you, as a human, can do what you want. Right? That is to be avoided. I'm not worried about that case. I'm more worried about the case that if you want to do something, it's just so much easier to ask your robot or your AI to do it for you.

The human spirit wants to overcome a challenge. I mean, the unchallenged life... that's so critical.

But there will always be new challenges. When I was a boy, one

of the things that I did is I would

repair my father's car

right I don't do that anymore. When I

was a boy I used to mow the lawn. I

don't do that anymore.

Sure.

Right. So there are plenty of examples

of things that we used to do that we

don't need to do anymore. But there'll

be plenty of things. Just remember the

complexity of the world that I'm

describing is not a simple world. Just

managing the world around you is going

to be a full-time and purposeful job.

Partly because there will be so many people pushing misinformation and fighting for your attention, and there's obviously lots of competition and so forth. There are lots of things to worry about. Plus, you have all of the people trying to get your money, create opportunities, deceive you, what have you. So I think human purpose will remain, because humans need purpose.

That's the point. And you know, there's lots of literature showing that people who have what we would consider to be low-paying, worthless jobs enjoy going to work. So the challenge is not to get rid of their jobs; it's to make their jobs more productive using AI tools. They're still going to go to work. And to be very clear, this notion that we're all

going to be sitting around doing poetry

is not happening. Right? In the future

there'll be lawyers. They'll use tools

to have even more complex lawsuits

against each other, right? There will be

evil people who will use these tools to

create even more evil problems. There

will be good people who will be trying

to deter the evil people. The tools

change, but the structure of humanity,

the way we work together is not going to

change.

Peter and I were on Michael Saylor's yacht a couple months ago, and I was complaining that the curriculum is completely broken in all these schools. What I meant was we should be teaching AI. And he said, "Yeah, they should be teaching aesthetics." And I looked at him like, "What the hell are you talking about?" He said, "No, in the age of AI, which is imminent, look at everything around you, whether it's good or bad, enjoyable or not enjoyable. It's all about designing aesthetics." When the AI is such a force multiplier that you can create virtually anything, what are you creating and why? That becomes the challenge.

If you look at Wittgenstein and the sort of theories of all of this stuff, it is all fundamental. We're having the conversation that America has, about tasks and outcomes. It's our culture. But there are other aspects of human life: meaning, thinking, reasoning. We're not going to stop doing that.

So imagine if your purpose in life in the future is to figure out what's going on. To be successful, just figuring that out is sufficient, because once you've figured it out, it's taken care of for you.

That's beautiful,

right? That provides purpose.

Yeah.

Um, it's pretty clear that robots will take over an awful lot of mechanical or manual work. And for people who like to... you know, I liked to repair the car. I don't do it anymore. I miss it, but I have other things to do with my time.

Yeah.

Take me forward. When do you see uh what

you define as digital super

intelligence?

Uh within 10 years.

Within 10 years. And what do people need to know about that? What do people need to understand and prepare themselves for, whether as a parent, an employee, or a CEO?

One way to think about it is that when digital superintelligence finally arrives and is generally available and generally safe, you're going to have your own polymath.

So you're going to have the sum of

Einstein and Leonardo da Vinci in the

equivalent of your pocket. I think

thinking about how you would use that

gift is interesting. And of course evil

people will become more evil, but the

vast majority of people are good. Yes,

they're well-meaning, right? So going

back to your abundance argument, there

are people who've studied the the n the

notion of productivity increases and

they believe that you can get we'll see

to 30% year-over-year economic growth

through abundance and so forth. That's a

very wealthy world. That's a world of

much less disease, many more choices,

much more fun if you will, right? Just

taking all those poor people and lifting

them out of the daily struggle they

have. That is a great human goal. That's

focus on that. That's the goal we should

have. Does GDP still have meaning in

that world?

If you include services, it does. Um, one of the things about manufacturing: everyone's focused on trade deficits, and they don't understand that the vast majority of modern economies are service economies, not manufacturing economies. If you look at the percentage of farming, it went from roughly 98% to roughly 2 or 3% in America over a hundred years. If you look at manufacturing, the heydays were the 30s and 40s and 50s; those percentages are now well lower than 10%. It's not because we don't buy stuff; it's because the stuff is automated. You need fewer people, and there are plenty of people working in other jobs. So again, look at the totality of the society. Is it healthy?

If you look at China, it's easy to complain about them. Um, they now have deflation. They have a term for it, called lying flat, where people stay at home. They don't participate in the workforce, which is counter to their traditional culture. If you look at reproduction rates, these countries are essentially having no children, and that's not a good thing.

Yeah.

Right. Those are problems that we're

going to face. Those are the new

problems of the age.

I love that.

Eric, uh, so grateful for your time.

Thank you. Thank you both. Um, I love your show.

Yeah. Thank you, buddy.

Thank you.

Okay. Thank you, guys. If you could have had a 10-year head start on the dot-com boom back in the 2000s, would you have taken it? Every week, I track the major tech metatrends. These are massive, game-changing shifts that will play out over the decade ahead, from humanoid robotics to AGI, quantum computing, energy breakthroughs, and longevity. I cut through the noise and deliver only what matters to our lives and our careers. I send out a Metatrends newsletter twice a week as a quick two-minute read over email. It's entirely free. These insights are read by founders, CEOs, and investors behind some of the world's most disruptive companies. Why? Because acting early is everything. This is for you if you want to see the future before it arrives and profit from it. Sign up at diamandis.com/metatrends and be ahead of the next tech bubble. That's diamandis.com/metatrends.
