
A.I., Mars and Immortality: Are We Dreaming Big Enough? | Interesting Times with Ross Douthat

By Interesting Times with Ross Douthat

Summary

Key Takeaways

  • Technological stagnation persists in 2025: Peter Thiel argues that despite advancements in digital technology like A.I., the overall pace of innovation has slowed significantly since the mid-20th century, leading to a generalized sense of stagnation. [01:27], [04:16]
  • Growth and dynamism are essential for society: Thiel contends that societal institutions, particularly the middle class, which relies on the expectation of upward mobility, are predicated on growth. Without it, society risks unraveling. [08:10], [08:25]
  • More risk-taking needed in science and medicine: Thiel advocates for a greater willingness to take risks, citing the lack of progress in areas like Alzheimer’s research as evidence that current approaches are insufficient and need radical change. [12:43], [13:01]
  • A.I. is a significant, but not transformative, innovation: While acknowledging A.I. as a major development, Thiel places it roughly on the scale of the internet in the late 1990s, suggesting it’s not enough to fundamentally end societal stagnation on its own. [30:56], [31:02]
  • The Antichrist narrative is evolving: Thiel posits that the modern Antichrist figure would not be a technological genius, but rather someone who leverages fear of existential risks and technological change to impose a “peace and safety” totalitarian order. [49:33], [50:15]

Topics Covered

  • Technological Stagnation: Why the world is still stuck.
  • Why societies need growth to avoid unraveling.
  • Escaping stagnation requires embracing more risk.
  • Can populism drive disruptive political change?
  • Is AI a true breakthrough or deeper stagnation?

Full Transcript

Is Silicon Valley recklessly ambitious?

What should we fear more:

Armageddon or stagnation?

Why is one of the world’s most successful investors worrying

about the Antichrist?

My guest today is the co-founder

of PayPal and Palantir, and an early investor

in the political careers of Donald Trump and JD Vance.

Peter Thiel is the original tech right power player,

well known for funding a range of conservative and simply

contrarian ideas.

But we’re going to talk about his own ideas because despite

the slight handicap of being a billionaire,

there’s a good case that he’s the most influential right-wing

intellectual of the last 20 years.

Peter Thiel, welcome to "Interesting Times."

Thanks for having me.

You’re very welcome.

Thanks for being here.

So I want to start by taking you back in time,

about 13 or 14 years.

You wrote an essay for National Review,

the conservative magazine, called “The End of the Future.”

And basically the argument in that essay

was that the dynamic, fast-paced, ever-changing

modern world was just not nearly as dynamic

as people thought,

and that actually, we'd entered a period

of technological stagnation.

That digital life was a breakthrough, but not

as big a breakthrough as people had hoped.

And that the world was kind of stuck, basically.

And you weren’t the only person to make arguments like

this, but it had a special potency coming from you

because you were a Silicon Valley insider who had gotten

rich in the digital revolution.

So I’m curious, in 2025:

Do you think that diagnosis still holds?

Yes, I still broadly believe in the stagnation thesis.

It was never an absolute thesis.

So the claim was not that we were absolutely, completely

stuck.

It was in some ways a claim that the velocity had slowed.

It wasn’t zero. From 1750 to 1970,

200-plus years, we had a period of accelerating change where we

were relentlessly moving faster.

The ships were faster, the railroads were faster,

the cars were faster, the planes were faster.

It culminates in the Concorde and the Apollo missions.

And then, in all sorts of dimensions, things had slowed.

I always made an exception

for the world of bits.

So we had computers and software and internet

and mobile internet.

And then the last 10, 15 years you had crypto and the A.I.

revolution, which I think is in some sense pretty big.

But the question is: Is it enough to really get out

of this generalized sense of stagnation?

And there’s an epistemological question you can start with

on the "Back to the Future" essays:

How do we even know whether we’re

in stagnation or acceleration?

Because one of the features of late modernity

is that people are hyperspecialized.

And so, can you say that we’re not making progress in physics

unless you’ve devoted half your life to studying string

theory?

Or what about quantum computers?

Or what about cancer research and biotech and all

these verticals?

And then how much does progress

in cancer count versus string theory?

So you have to give weightings to all these things.

So in theory it’s an extremely difficult question to get a handle

on. But the fact that it’s so hard to answer, that we

have ever narrower groups of guardians guarding themselves,

is itself cause for skepticism.

And so yes, I think broadly we’re in this world that’s

still pretty stuck.

It’s not absolutely stuck.

Yeah. You mentioned "Back to the Future."

We just showed our kids the original "Back

to the Future."

The first one with Michael J. Fox and of course-

Yeah, it was like 1955 to 1985, 30 years back.

And then “Back to the Future Part II”

was, I think, 1985 to 2015, which

is now a decade in the past.

And that’s where you had flying cars.

And the 2015 future is wildly divergent from the 1985 one.

The 2015 future

did have Biff Tannen as a Donald Trump-like

figure in some kind of power.

So it had some prescience.

But yeah, the big, the big noticeable thing

is just how different the built environment looks.

And so one of the strongest cases for stagnation that I’ve

heard is that if you put someone in a time machine from

various points, they would recognize themselves to be

in a completely different world if they left 1860 or 1890

for 1970, if those were the 80 years of your lifetime

or something like that.

But the world just to my kids, even as children of 2025,

looking at 1985, it’s like the cars were a little different.

And no one has phones, but the world seems fairly similar.

So that’s a kind of non-statistical,

but common sense, understanding.

But what would convince you

that we were living through a period of takeoff?

Is it just economic growth?

Is it productivity growth?

Like, what are the numbers for stagnation versus dynamism

that you look at?

Sure.

Well, the economic number would just

be: What are your living standards compared

to your parents’?

If you’re a 30-year-old millennial,

how are you doing versus how your Boomer parents

were doing when they were 30 years old?

There are intellectual questions.

How many breakthroughs are we having?

How do we quantify these things?

What are the returns of going into research?

There certainly are diminishing returns

to going into science or going into academia generally.

And then maybe this is why so much of it feels like this

sociopathic, Malthusian kind of an institution:

because you have to throw more and more and more at something

to get the same returns.

And at some point, people give up and the thing collapses.

Well right.

So let’s pick up on that.

Why should we want growth and dynamism? Because, as you’ve

pointed out in some of your arguments on the subject,

in the Western world in the 1970s,

around the time you think things slow down and start

to stagnate, where people become very anxious about

the costs of growth, the environmental costs,

above all.

And you end up with a widely shared perspective

that we’re rich enough.

And if we try too hard to get that much richer,

the planet won’t be able to support us.

We’ll have degradation of various kinds.

And we should be content with where we are.

So what’s wrong with that argument?

Well, I think there are deep reasons

the stagnation happened.

So there are always three questions

you can ask about history.

What actually happened?

And there’s the question you eventually get to: What should be done about

it?

But there’s also this intermediate question: Why did

it happen?

People ran out of ideas, I think.

To some extent the institutions degraded

and became risk averse.

And these are cultural transformations

we can describe.

But then I think to some extent,

people also had some very legitimate worries

about the future: if we continued

to have accelerating progress, were we accelerating

towards environmental apocalypse

or nuclear apocalypse or things like that?

But I think if we don’t find a way back to the future,

the society unravels.

It doesn’t work.

I would define the middle class

as the people who expect their kids to do better

than themselves.

And when that expectation collapses,

we no longer have a middle class society.

And maybe there’s some way you can

have a feudal society in which things are always static

and stuck, or maybe there’s some way you can shift to some

radically different society, but it’s not the way

the Western world, not the way the United States, has functioned

for the first 200 years of its existence.

So you think that ordinary people won’t accept stagnation

in the end, that they will rebel and pull things

down around them in the course of that rebellion?

They may rebel, or our institutions just don’t work.

All of our institutions are predicated on growth.

Our budgets are certainly

predicated on growth.

Yeah. If you take, I don’t know, Reagan and Obama: Reagan was

consumer capitalism, which is oxymoronic.

It was borrowing: you don’t save money, as a capitalist would.

You borrow money.

And Obama was low-tax socialism,

just as oxymoronic as the consumerist capitalism

of Reagan.

And yeah, low-tax socialism is way better than high-tax

socialism, but I worry that it’s not sustainable.

At some point, either the taxes go up

or the socialism ends.

So it’s deeply, deeply unstable.

And that’s why people are not optimistic.

They don’t think we’ve hit some stable state.

The Greta future, maybe it can work.

This is the Greta Thunberg.

Just to be clear, that’s a reference to Greta Thunberg,

the activist best known for her climate protests.

Who, to you, I would say, represents

a kind of symbol of an anti-growth, effectively

authoritarian, environmentalist-dominated

future.

Sure but we’re not there yet.

We’re not there yet.

It would be a very, very different society

if you actually lived in a kind

of degrowth world of small Scandinavian villages.

I’m not sure it would be North Korea, but it would be.

It would be super oppressive.

One thing that’s always struck me is that when you have this

sense of stagnation, a sense of decadence,

to use a word that I like to use for it

in a society,

you then also have people who end up

being kind of eager for a crisis, right?

Eager for a moment to come along where they

can radically redirect society from the path it’s on.

Because I tend to think that in rich societies,

once you hit a certain level of wealth,

people become very comfortable,

they become risk averse, and it’s just hard.

It’s hard to get out of decadence

into something new, without a crisis.

So the original example for me was after September 11.

There was this whole mentality among foreign policy

conservatives that we had been decadent and stagnant,

and now is our time to wake up and launch a new crusade

and remake the world.

And obviously that ended very badly.

But something similar.

Well, Bush 43 just told people to go shopping right away.

So it wasn’t anti-decadent for the most part.

Maybe there was some neocon foreign policy

enclave in which people were LARPing as a way

to get out of decadence.

But the dominant thing was the Bush 43 people telling people

just to go shopping.

So what risks should you be willing to take

to escape decadence?

It does seem like there’s a danger here where the people

who want to be anti-decadent have to take on a lot of risk.

They have to say, look, you’ve got this nice, stable,

comfortable society.

But guess what?

We’d like to have a war or a crisis or a total

reorganization of government and so on.

They have to lean into danger.

I don’t know if I have to give you a precise answer,

but my directional answer is: a lot more.

We should take a lot more risk.

We should be doing a lot more.

And I can go through all these different verticals.

If we look at biotech, something like dementia,

Alzheimer’s, we’ve made zero progress in 40 to 50 years.

People are completely stuck on beta amyloid.

It’s obviously not working.

It’s just some kind of a stupid racket where the people

are just reinforcing themselves.

And so yes, we need to take way

more risk in that department.

Well, to keep us in the concrete,

I want to stay with that example for a minute

and ask: O.K., what does that mean,

saying we need to take more risks in anti-aging research?

Does it mean that the FDA has to step back and say

anyone who has a new treatment for Alzheimer’s can go ahead

and sell it on the open market?

Like, what does risk in the medical space look like?

Yeah, you would take a lot more risk.

If you have a disease, there probably are a lot more risks

you can take.

There are a lot more risks the researchers can take.

Culturally, what I imagine it looks like is early modernity,

where people thought

we would cure diseases.

They thought we would have radical life extension.

Immortality was part of the project

of early modernity.

It was Francis Bacon, Condorcet.

And maybe it was anti-Christian,

maybe it was downstream of Christianity.

It was competitive.

If Christianity promised you a physical Resurrection, science

was not going to succeed unless it promised you

the exact same thing.

But I remember 1999, 2000.

When we were running PayPal, one of my co-founders,

Luke Nosek, was into Alcor and cryonics and the idea that

people should freeze themselves. And we had one day

where we took the whole company to a freezing party.

It was like a Tupperware party: people sell Tupperware

at a Tupperware party, and at a freezing party

they sell freezing policies.

Was it just your heads?

What was going to be frozen?

You could get a full body or just a head.

Just the head was cheaper.

It was disturbing.

The dot matrix printer didn’t quite work,

and so the freezing policies couldn’t be

printed out.

But in retrospect, this was still technological stagnation

once again.

But it was.

But it’s also a symptom of the decline, because in 1999

this was not a mainstream view,

but there was still a fringe Boomer view where they still

believed they could live forever.

And that was the last generation.

And so I’m always anti-Boomer, but maybe there’s something

we’ve lost even in this fringe Boomer narcissism, where there

were at least a few Boomers who still believed science

would cure all their diseases.

No one who’s a millennial believes that

anymore.

I think there are some people who

believe in a different kind of immortality, though,

right now.

I think part of the fascination with A.I.

is connected to a specific vision

of transcending limits.

And I’m going to ask you about that after I ask you about politics, because one

of the striking things I thought about your original

argument on stagnation, which was mostly about technology

and the economy, was that it could be applied to a pretty

wide range of things.

And at the time you were writing that essay,

you were interested in seasteading,

this idea of essentially building

new polities independent of the sclerotic Western world.

But then you made a pivot in the 2010s.

So you were one of the few prominent, maybe

the only prominent, Silicon Valley

supporters of Donald Trump in 2016.

You supported a few carefully selected

Republican Senate candidates.

One of them is now the vice president

of the United States.

And my view as an observer of what you were doing

was that you were basically being a kind of venture

capitalist for politics.

You were saying, here are some disruptive agents who might

change the political status quo,

and it’s worth a certain kind of risk here.

Is that how you thought about it?

Sure, there were all sorts of levels.

I mean, one level was these hopes that we could

redirect the Titanic from the iceberg it was heading toward,

or whatever the metaphor is, that we could really

change course as a society

through political change.

Maybe a much narrower aspiration

was that we could at least

have a conversation about this. When someone like Trump

said “Make America great again”:

O.K., is that a positive, optimistic, ambitious agenda,

or is it merely a very pessimistic assessment

that we are no longer a great country?

And I didn’t have great expectations about what Trump

would do in a positive way.

But I thought, at least for the first time in 100 years,

we had a Republican who was not giving us this syrupy Bush

nonsense.

And that was not the same as progress,

but we could at least have a conversation.

In retrospect, this was a preposterous fantasy.

I had these two thoughts in 2016,

and you often have these ideas that

are just below the level of your consciousness.

But the two thoughts I had that I wasn’t able to combine

were: number one, nobody would be mad at me

for supporting Trump if he lost.

And number two, I thought he had a 50/50 chance of winning.

And implicit in the first was: Why would nobody be mad at you

if he lost?

It would just be such a weird thing, and it wouldn’t really

matter.

But then I thought he had a 50/50 chance because the problems

were deep and the stagnation was frustrating.

And then the fantasy I had was yeah, if he won,

we could have this conversation.

And the reality was people weren’t ready for it.

And then maybe we’ve progressed to the point where

we can have this conversation at this point in 2025,

a decade after Trump.

And of course, you’re not a zombie left-wing person,

Ross.

I’ve been called many things.

I’ll take whatever progress I can get.

So from your perspective,

let’s say there’s two layers.

There’s a basic sense that this society needs disruption.

It needs risk.

Trump is disruption, Trump is risk.

And then the second level is that Trump

is actually willing to say things

that are true about American decline, right?

So do you feel like you, as an investor, as a venture

capitalist, got anything out of the first Trump term?

Like, what did Trump do in his first term

that you felt was anti-decadent or anti-stagnation,

if anything? Maybe the answer is nothing.

Well, I think it took longer

and it was slower than I would have liked.

But we have gotten to the place where a lot

of people think something’s gone wrong.

And that was not the conversation

I was having in 2012, 2013, 2014.

I had a debate with Eric Schmidt in 2012

and Marc Andreessen in 2013 and Bezos in 2014.

I was on the side of there’s a stagnation problem,

and all three of them offered versions of everything’s going

great.

And I think at least those three people have,

to varying degrees, updated and adjusted.

Silicon Valley adjusted

on the stagnation question.

But Silicon Valley

has more than adjusted:

a big part of Silicon Valley

ended up going in for Trump in 2024, including, obviously,

most famously, Elon Musk.

Yeah, this is deeply linked to the stagnation

issue in my telling.

I mean, these things are always super complicated,

and I’m so hesitant to speak for all these people.

But take someone like Mark Zuckerberg, Facebook, Meta.

In some ways, I don’t think he was very ideological.

He didn’t think this stuff through that much.

The default was to be liberal.

And the question was always: If the liberalism isn’t working,

what do you do?

And for year after year after year,

it was do more if something doesn’t work,

you just need to do more of it.

And you up the dose and you up the dose and you

spend hundreds of millions of dollars

and you go completely woke and everybody hates you.

And at some point, it’s like, O.K, maybe this isn’t working.

So they pivot.

So it’s not a pro-Trump thing.

It’s not a pro-Trump thing, but there is, both in public

and private conversations,

a kind of sense that Trumpism and populism in 2024,

maybe not in 2016, when Peter was out there

as the lone supporter,

but now in 2024, can be a vehicle

for technological innovation, economic dynamism.

So that’s you framing it really,

really optimistically here.

Well, I think I know you’re pessimistic.

You frame this optimistically,

but you’re just saying these people are going to be

disappointed, that they’re just set up for failure.

I mean, people

expressed a lot of optimism.

That’s all I’m saying.

Elon Musk expressed a lot of it. I mean,

he expressed some apocalyptic anxieties

about how budget deficits were going to kill us all.

But he came into government and people around him came

into government basically saying,

we have a partnership with the Trump administration,

and we’re pursuing technological greatness.

I think they were optimistic.

And so you’re coming from a place of greater pessimism

or realism.

What I’m asking for is your assessment

of where we are, not their assessment.

But like, do you think populism in Trump 2.0

looks like a vehicle for technological dynamism?

It’s still by far the best option we have.

I don’t know.

Is Harvard going to cure dementia by just puttering

along, doing the same thing that hasn’t worked for 50

years?

So that’s just the case that it

can’t get worse.

Let’s do disruption.

But the critique of populism right

now would be that Silicon Valley made an alliance

with the populists.

But in the end, the populists don’t care about science.

They don’t want to spend money on science.

They want to kill funding to Harvard just because they

don’t like Harvard.

And in the end, you’re not going to get the kind

of investments in the future that Silicon Valley wanted.

Is that wrong?

Yeah, but

we have to go back to this question of how well

the science is working in the background.

This is where the New Dealers,

whatever was wrong with them,

pushed science hard: you funded it,

you gave money to people and you scaled it.

Whereas today, if there were an equivalent of Einstein

and he wrote a letter to the White House,

it would get lost in the mail room,

and the Manhattan Project is unthinkable.

If we call something a moonshot, the way

Biden talked about, let’s say, cancer research:

a moonshot in the ’60s still meant that you went

to the moon.

A moonshot now means something completely fictional

that’s never going to happen.

“Oh, you need a moonshot for that” doesn’t mean

we need an Apollo program.

It means it’s never, ever going to happen.

And so.

But it seems like, then, for you,

as opposed to maybe for some other people in Silicon

Valley,

the value of populism is in tearing away the veils

and illusions, and we’re not necessarily at the stage where

you’re looking to the Trump administration to build

the new, to do the Manhattan Project, to do the moonshot.

It’s more like populism helps us see that it was all fake.

You need to try to do both.

And they’re very entangled with each other.

And I don’t know, there’s the deregulation of nuclear power,

and at some point we’ll get back

to building new nuclear power plants, or better designed

ones, or maybe even fusion reactors.

And so yes, there’s a deregulatory,

deconstructive part.

And then at some point, you actually get

to construction, and it’s all things like that.

So yeah, in some ways you’re clearing the field.

But you’ve personally stopped

funding politicians.

I am schizophrenic on this stuff.

I think it is incredibly important and it’s incredibly

toxic.

And so I go back and forth on it.

Incredibly toxic for you personally?

For everybody,

everybody who gets involved.

It’s zero-sum.

It’s crazy.

And in some ways because everyone hates

you and associates you with Trump?

Like, how is it toxic for you personally?

It’s toxic because it’s a zero-sum world.

The stakes in it feel really, really high.

And you end up having enemies you didn’t have before.

It’s toxic for all the people who get involved in different

ways.

There is a political dimension of getting "Back to the Future."

I don’t know.

This is a conversation I had with Elon back in 2024.

And we had all these conversations.

I had the seasteading version with Elon where I said

if Trump doesn’t win, I want to just leave the country.

And then Elon said, there’s nowhere to go.

There’s nowhere to go.

This is the only place.

And then you always think of the right arguments

to make later.

And it was about two hours after we had dinner, when I was

home, that I thought: Wow, Elon,

you don’t believe in going to Mars anymore.

2024 is the year where Elon

stopped believing in Mars.

Not as a silly science tech project, but as

a political project.

Mars was supposed to be a political project.

It was building an alternative.

And in 2024, Elon came to believe

that if you went to Mars, the socialist U.S.

government, the woke A.I., would follow you to Mars.

It was the Demis Hassabis meeting with Elon that we brokered.

He was doing DeepMind.

This is an A.I. company.

Yeah. The rough conversation was: Demis tells

Elon, I’m working on the most important project

in the world.

I’m building a superhuman A.I.

And Elon responds to Demis, well,

I’m working on the most important project

in the world.

I am turning us into an interplanetary species.

And then Demis said, well, my A.I.

will be able to follow you to Mars.

And then Elon went quiet.

But in my telling of the history,

it took years for that to really hit Elon.

It took him until 2024 to process it.

But that doesn’t mean he doesn’t believe in Mars.

It just means that he decided he

had to win some kind of battle over budget deficits

or wokeness to get to Mars.

What does Mars mean?

Yeah.

And again, it’s: What does Mars mean?

Is it just a scientific project,

or is it, I don’t know,

a Heinlein vision of a new society?

Yeah, Heinlein.

A libertarian paradise populated by many, many

people like Elon Musk.

Well, I assume it was never concretized that specifically.

But if you concretize things, then maybe you realize that

Mars is supposed to be more than a science project,

it’s supposed to be a political project.

And then when you concretize it,

you have to start thinking through,

well, the woke A.I. will follow you,

the socialist government will follow you,

and then maybe you have to do something other

than just going to Mars.

O.K., so the woke A.I.

Artificial intelligence seems like,

if we’re still stagnant, the biggest exception

to stagnation.

Yes, it’s the place where there’s been

remarkable progress,

progress surprising to many people.

It’s also, since we were just talking about

politics,

the place where the Trump administration is, I think,

to a large degree, giving A.I. investors a lot of what they

wanted in terms of both stepping back and doing public

private partnerships.

So it’s a zone of progress and governmental engagement.

And you are an investor in A.I.

What do you think you’re investing in?

Well, I don’t know, there are a lot of layers to this.

One question we can frame is just how big a thing

I think A.I. is.

And my stupid answer is: it’s somewhere more

than a nothing burger and less than the total

transformation of our society.

So my placeholder is that it’s roughly on the scale

of the internet in the late 90s,

which is to say I’m not sure it’s enough to really end

the stagnation.

It might be enough to create some great companies.

And the internet added maybe a few percentage points

to GDP, maybe 1 percent to GDP growth every year

for 10, 15 years.

It added some to productivity.

And so that’s roughly my placeholder for A.I.

It’s the only thing we have.

It’s a little bit unhealthy that it’s so unbalanced,

that this is the only thing we have.

I’d like to have more multi-dimensional progress.

I’d like us to be going to Mars.

I’d like us to be having cures for dementia.

But if all we have is A.I., I will take it.

There are risks with it.

There are obviously dangers with this technology.

So you’re a skeptic.

But then you are a skeptic of what you might call

the superintelligence cascade theory,

which basically says that if A.I. succeeds,

it gets so smart that it gives us progress in the world

of atoms. It’s like: All right, we can’t cure dementia,

we can’t figure out how to build the perfect factory that

builds the rockets that go to Mars. But A.I. can.

And at a certain point,

you pass a certain threshold and it gives us

not just more digital progress, but 64 other forms

of progress.

It sounds like you don’t believe that,

or you think that’s less likely.

Yeah, I somehow don’t think that’s really been the gating

factor.

What does that mean,

the gating factor?

It’s probably a Silicon Valley ideology.

And maybe in a weird way it’s more liberal than

a conservative thing.

But people are really fixated on IQ in Silicon Valley,

on the idea that it’s all about smart people.

And if you have more smart people, they will

do great things.

And then the economists’ anti-IQ argument

is that people actually do worse:

the smarter they are, the worse they do.

And it’s just that they don’t know how to apply it, or our society

doesn’t know what to do with them, and they don’t fit in.

And so that suggests that the gating factor isn’t IQ

but something that’s deeply wrong with our society.

So is that a limit on intelligence,

or a problem of the personality types

human superintelligence creates?

I mean, I’m very sympathetic to the idea.

And I made this case when I did an episode of this podcast

with an A.I. accelerationist: the idea

that certain problems can just be solved

if you ramp up intelligence.

It’s like, we ramp up intelligence and boom,

Alzheimer’s is solved.

We ramp up intelligence, and the A.I. can

figure out the automation process that builds you

a billion robots overnight.

I’m an intelligence skeptic, in the sense that

I think there probably are limits.

It’s hard to prove one way or the other.

It’s always hard to prove these things.

But until we have the superintelligence,

I share your intuition, because I think we’ve had a lot

of smart people and things have been stuck for other

reasons.

And so maybe the problems are unsolvable,

which is the pessimistic view.

Maybe there is no cure for dementia at all.

And it’s a deeply unsolvable problem.

There’s no cure for mortality.

Maybe it’s an unsolvable problem,

or maybe it’s these cultural things.

So it’s not the individually smart person,

but it’s how this fits into our society.

Do we tolerate heterodox smart people.

Maybe you need heterodox smart people

to do crazy experiments. And if the AI is just conventionally

smart, if you define it not as woke, again,

wokeness is too ideological, but if you just define it

as conformist, maybe that's not the kind of smartness

that's going to make a difference.

So do you fear, then, a plausible future where AI

in a way becomes itself stagnation:

that it's highly intelligent,

creative in a conformist way?

It’s like the Netflix algorithm.

It makes infinite O.K movies that people watch.

It generates infinite O.K ideas.

It puts a bunch of people out of work

and makes them obsolete.

But it doesn't transform anything.

It, like, deepens stagnation in some way.

Is that a fear.

It’s like people just outsource.

It's quite possible. That's certainly a risk.

But I guess where I end up is I still

think we should be trying.

And that the alternative is just total stagnation.

So yeah, there are all sorts of interesting things going

to happen: maybe drones in a military context get

combined with AI, and O.K, this is kind of scary or dangerous

or dystopian, or it's going to change things.

But if you don't have AI, wow, there's just nothing going on.

And there's a version of this discussion

about the internet: did the internet lead to more

conformity and more wokeness?

And yeah, there are all sorts of ways where it didn’t lead

to quite the cornucopian diverse explosion of ideas

that libertarians fantasized about in 1999.

But counterfactually, I would argue that it was still better

than the alternative, that if we hadn’t had the internet,

maybe it would have been worse.

I bet it’s better than the alternative.

And the alternative is nothing at all.

Because the.

Look, here's one place where the stagnation arguments are

still reinforced.

The fact that we're only talking about AI, I feel,

is always an implicit acknowledgment that,

but for AI, we are in almost total stagnation.

But the world of A.I. is clearly filled with people who

at the very least seem to have a more utopian,

transformative, whatever word you want to call it view

of the technology than you’re expressing here.

And you mentioned earlier

the idea that the modern world used to promise radical life

extension and doesn’t anymore.

It seems very clear to me that a number of people deeply

involved in artificial intelligence

see it as a kind of mechanism for transhumanism,

for transcendence of our mortal flesh,

and either some kind of creation of a successor

species or some kind of merger of mind and machine.

And do you think that's all just kind of irrelevant

fantasy, or do you think it's just hype?

Do you think people are trying to raise money by pretending

that we're going to build a machine god, right?

Is it hype.

Is it delusion.

Is it something you worry about.

I think you would.

You would prefer the human race to endure.

You’re hesitating.

Well, I... Yes, I would.

This is a long hesitation.

There’s a long hesitation.

There’s so many questions.

And should the human race survive.

Yes, O.K. But I also would like us to radically solve

these problems.

And so it’s always I don’t know.

Yeah, with transhumanism, the ideal was

this radical transformation, where

your natural human body gets transformed

into an immortal body.

And there's a critique of, let's say,

the trans people in the sexual context.

A transvestite is someone who changes their clothes

and cross-dresses, and a transsexual is someone where

you change your, I don't know, penis into a vagina.

And we can then debate how well those surgeries work,

but we want more transformation than that.

The critique is not that it's weird and unnatural.

It's: man, it's so pathetically little.

And we want more than cross-dressing or changing

your sex organs.

We want you to be able to change your heart

and change your mind and change

your whole body.

And by the way, the critique orthodox Christianity has

of these things is that they don't go far enough:

transhumanism is just changing your body,

but you also need to transform your soul,

and you need to transform your whole self.

And so.

But the other one.

Wait, wait, wait. Sorry. I generally agree with

what I think is your belief that religion should

be a friend to science and ideas of scientific progress.

I think any idea of divine Providence

has to encompass the fact that we have progressed

and achieved and done things that

would have been unimaginable to our ancestors.

But it still also seems like, Yeah,

the promise of Christianity in the end is you get

the perfected body and the perfected soul through God’s

grace.

And the person who tries to do it on their own

with a bunch of machines is likely to end up

as a dystopian character.

Well, it's...

Let's articulate this.

You can have a heretical form of Christianity

that says something else.

I don’t know.

I think the word nature does not occur once

in the Old Testament.

And so there is a sense in which,

the way I understand

the Judeo-Christian inspiration, it

is about transcending nature.

It is about overcoming things.

And the closest thing you can say to nature is that people

are fallen; that's the natural thing in a Christian

sense, that you're messed up.

And that’s true.

But there’s some ways that with God’s help are supposed

to transcend that and overcome that.

But the people, present company excepted,

most of the people working to build

the hypothetical machine god

don't think that they're cooperating with Yahweh,

Jehovah, the Lord of hosts.

They think that they're building immortality

on their own.

Yeah right.

We’re jumping around a lot.

A lot of things.

So again the critique I was saying is they’re not

ambitious enough.

From a Christian point of view,

these people are not ambitious enough.

Now, then we get into this question.

Well, they're not morally and spiritually ambitious enough.

And then, are they even still

physically ambitious enough?

Are they even still really transhumanists?

And this is where, O.K., man, the cryonics thing seems

like a retro thing from 1999.

There isn't that much of that going on.

So they’re not transhumanists on a physical body.

And then, O.K, well, maybe it’s not about cryonics,

maybe it’s about uploading.

O.K, well, it’s not quite.

I’d rather have my body.

I don’t want just a computer program that simulates me.

So that uploading seems like a step down from cryonics, but.

But then, even that's just part of the conversation.

And this is where it gets very hard to score.

And I don't want to say they're all making it up

and it's all fake.

But I think you think some of it's fake.

I don't think it's fake; fake implies people are lying.

But I want to say it's not the center of gravity.

Yeah, and so there is a cornucopian language.

There's an optimistic language.

A conversation I had with Elon a few weeks ago about this

was, he said, we’re going to have a billion humanoid robots

in the US in 10 years.

And I said, well, if that’s true,

you don’t need to worry about the budget deficits

because we’re going to have so much growth.

The growth will take care of this.

And then, well, he's still worried about the budget deficits.

And then this doesn’t prove that he doesn’t believe

in the billion robots.

But it suggests that maybe he hasn’t thought it through

or that he doesn’t think it’s going to be as transformative

economically or that there are big error bars around it.

But yeah, there’s some way in which these things are not

quite thought through.

If I had to give a critique of Silicon Valley,

it's that it's always bad at what the meaning of tech is,

and the conversations tend to go into this

microscopic thing, where it's like,

what are the IQ, Elo scores of the AI.

And exactly how do you define AGI.

And we get into all these endless technical debates.

And there are a lot of questions

that are at an intermediate level of meaning

that seem to me to be very important, which is like,

what does it mean for the budget deficit.

What does it mean for the economy.

What does it mean for geopolitics.

One of the conversations we had recently was: does it change the calculus

for China invading Taiwan, where

we have an accelerating AI revolution in the military.

Is China falling behind.

And maybe on the optimistic side,

it deters China because they've effectively lost.

And on the pessimistic side it accelerates them because they

know it’s now or never.

If they don’t grab Taiwan now they will fall behind.

And either way, this is a pretty important thing.

It’s not thought through.

We don’t think about what AI means for geopolitics.

We don’t think about what it means for the macro economy.

And those are the kinds of questions I’d want us to push

more.

There's also a very macroscopic question that

you're interested in, which will pull on the religion

thread a little bit here.

You have been giving talks recently

about the concept of the Antichrist, which

is a Christian concept, an apocalyptic concept.

What does that mean to you.

What is the antichrist?

How much time do we have?

We've got as much time as you have to talk about the Antichrist.

All right. Well, I could talk about it at length,

but we're near time.

I mean, but no, I think there’s always a question,

how do we articulate some of these existential risks,

some of the challenges we have.

And they're all framed in these runaway dystopian sci-fi

terms.

There’s a risk of nuclear war.

There’s a risk of environmental disaster.

Maybe something specific like climate change.

Although there are lots of other ones

we could come up with.

There’s a risk of I don’t know, bioweapons.

You have all the different sci-fi scenarios.

Obviously, there are certain types of risks with A.I.

But I always think that if we’re going to have this frame

of talking about existential risks,

perhaps we should also talk about the risk of another type

of a bad singularity, which I would describe as the one

world totalitarian state because I would say

the political solution, the default political solution

people have for all these existential risks is one world

governance.

What do you do about nuclear weapons.

We have a United Nations with real teeth that controls them.

They're controlled by an international political

order.

And then something like this is also:

what do we do about A.I.? We need

global compute governance.

We need a one world government to control all the computers,

log every single keystroke to make sure people don’t program

a dangerous A.I.

And I’ve been wondering whether that’s going from

the frying pan into the fire.

And so the atheist philosophical framing

is one world or none.

That was a short film that was put out by the Federation

of American Scientists in the late 40s,

which starts with a nuclear bomb blowing up the world.

And obviously you need a one world government

to stop it, one world or none.

And the Christian framing, which in some ways

is the same question, is Antichrist or armageddon?

You have the one world state of the Antichrist,

or we’re sleepwalking towards Armageddon.

One world or none;

Antichrist or Armageddon.

On one level they are the same question.

Now, I have a lot of thoughts on this topic,

but one question, and this was

a plot hole in all these Antichrist books

people wrote, how does the Antichrist take over

the world.

He gives these demonic, hypnotic speeches and people

just fall for it.

And so it's this plot hole.

This demonic thing is totally implausible.

It's a very implausible plot hole.

But I think we have an answer to this plot hole.

The way the Antichrist would take over the world

is you talk about Armageddon non-stop,

you talk about existential risk non-stop.

And this is what you need to regulate.

It's the opposite of the picture of Baconian science

from the 17th, 18th century, where the Antichrist is like

some evil tech genius, evil scientist who invents this

machine to take over the world.

People are way too scared for that.

In our world, the thing that has political resonance

is the opposite.

It is.

The thing that has political resonance

is that we need to stop science.

We need to just say stop to this.

And this is where Yeah, I don’t know.

In the 17th century, I can imagine a Doctor Strangelove

Edward Teller type person taking over the world.

In our world, it’s far more likely to be Greta Thunberg.

O.K, I want to suggest a middle ground

between those two options.

It used to be that the reasonable fear

of the Antichrist was of a kind of wizard of technology,

and now the reasonable fear is someone

who promises to control technology, make it safe,

and usher in what, from your point of view,

would be a kind of universal stagnation.

Well, that's more my description of how it would

happen.

So I think people still have a fear of a 17th century

anti-christ.

We’re still scared of Doctor Strangelove, right.

But you're saying the real Antichrist

would play on that fear and say,

you must come with me to avoid Skynet,

to avoid the Terminator, to avoid nuclear armageddon?

Yes. And I guess my view would be, looking at the world

right now, that you would need a certain kind

of novel technological progress

to make that fear concrete.

So I can buy that the world could

turn to someone who promised peace and regulation.

If the world became convinced that A.I. was

about to destroy everybody.

But I think to get to that point,

you need one of the accelerationist apocalyptic

scenarios to start to play out to get your peace and safety

anti-christ, you need more technological progress.

Like one of the key failures of totalitarianism in the 20th

century was that it had a problem of knowledge:

it couldn't know what was going on

all over the world.

So you need the A.I. or whatever else

to be capable of helping the peace and safety totalitarian

rule.

So don't you think you essentially need your worst-case

scenario to involve some burst of progress that is then

tamed and used to impose stagnant totalitarianism?

You can’t just get there from where we are right now.

Well, it can. Greta Thunberg's on a boat

in the Mediterranean

protesting Israel.

I just don't see the promise of safety from A.I.,

safety from tech, even safety from climate change,

right now as a powerful universal rallying cry,

absent accelerating change and real fear

of total catastrophe.

I mean, these things are so hard to score,

but I think environmentalism is pretty powerful.

I don't know if it's absolutely

powerful enough to create a one world totalitarian state.

But man, in its current form,

I want to say it's the only thing people still believe

in in Europe.

They believe in the green thing

more than in Islamic Sharia law or more

than in the Chinese Communist totalitarian takeover.

And as an idea of a future that looks

different from the present,

the only three on offer in Europe

are green, Sharia, and the totalitarian Communist state.

And the green one is by far the strongest,

in a declining, decaying Europe.

But it’s not a dominant player in the world.

It’s always in a context.

And then, I don't know, we had this really complicated history

with the way nuclear technology worked.

And, O.K., we didn't really get to a totalitarian one world state.

But by the 1970s, one account of the stagnation is that

the runaway progress of technology had gotten very

scary and that Baconian science ended at Los Alamos.

And then it was O.K. It ended there.

And we didn’t want to have any more.

And, when Charles Manson took LSD in the late 60s

and started murdering people, what he saw on LSD,

what he learned was that you could be like Dostoevsky,

an anti-hero in Dostoevsky, and everything was permitted.

And of course, not everyone became Charles Manson.

But crucially for the history, not everyone became as deranged

as Charles Manson.

But Charles Manson did not become the Antichrist

and take over the world.

I'm just...

We're ending.

We're ending in the apocalyptic.

No, but my telling of the history of the 1970s is

that the hippies did win.

We landed on the moon in July of 1969.

Woodstock started three weeks later.

And with the benefit of hindsight,

that’s when progress stopped and the hippies won.

And yeah, it was not literally Charles Manson.

But I want to stay with the Antichrist just

to end.

Because you're retreating. You're saying,

O.K., environmentalism is already pro-stagnation

and so on.

O.K, let’s agree with all that.

I'm just saying we're not living

under the Antichrist right now.

We’re just stagnant.

And you’re positing that something worse could be

on the horizon.

That would make stagnation permanent.

That would be driven by fear.

And I’m suggesting that for that to happen,

there would have to be some burst of technological

progress that was akin to Los Alamos that people are afraid

of.

And I guess this is my very specific question for you,

right.

It's that, well, you're an investor in A.I.

You’re deeply invested in Palantir,

in military technology and technologies,

of surveillance and technologies of warfare

and so on.

And it just seems to me that when you tell me

a story about the Antichrist coming to power

and using the fear of technological change

to impose order on the world, I

feel like that Antichrist would maybe

be using the tools that you were building.

Wouldn’t the Antichrist be like,

great we’re not going to have any more technological

progress.

But I really like what Palantir

has done so far, right?

I mean, isn’t that a concern.

Wouldn't the irony of history be that the man

publicly worrying about the Antichrist accidentally

hastens his or her arrival?

There look, there are all these different scenarios.

I obviously don’t think that that’s what I’m doing.

I mean, to be clear, I don't think

that's what you're doing either.

I’m just interested in how you get to a world willing

to submit to permanent authoritarian rule.

Well, but again, there are these different gradations

of this we can describe.

But is this so preposterous, what I've just told you,

as a broad account of the stagnation: that the entire

world has submitted for 50 years to peace and safety?

This is First Thessalonians 5:3.

The slogan of the Antichrist is peace and safety.

And we've submitted to it.

The FDA regulates drugs not just in the US, but de facto

in the whole world,

because the rest of the world defers to the FDA.

The Nuclear Regulatory commission

effectively regulates nuclear power plants

all over the world.

You can't design a modular nuclear reactor

and just build it in Argentina.

People won't trust the Argentinian regulators.

They're going to defer to the US.

And so it is at least a question why

we've had 50 years of stagnation.

And one answer is we ran out of ideas.

The other answer is that something happened culturally

where it wasn’t allowed.

And then the cultural answer can be a bottom up answer,

that it was just some transformation of humanity

into the more docile kind of a species,

or it can be at least partially top down

that there is this machinery of government that got changed

into this stagnation thing.

I think something like this nuclear power was supposed

to be the power of the 21st century.

And it somehow has gotten off-ramped

all over the world.

So in a sense, we're already living under a moderate rule

of the Antichrist, in that telling.

Do you think God is in control of history?

I mean, again, I think there's always room for human

freedom and human choice.

These things are, at least where we are today,

not absolutely predetermined one

way or another.

But God wouldn’t leave us forever under the rule

of a mild, moderate Antichrist, right.

That can’t be how the story ends, right.

Attributing too much causation to God is always

a problem.

I know there are different Bible verses I could give you,

but I'll give you John 15:25, where Christ says,

"They hated me without cause."

And so all these people

that are persecuting Christ have no reason,

no cause, for why they're persecuting Christ.

And if we interpret this as an ultimate causation verse,

they want to say: I'm persecuting because God caused

me to do this.

God is causing everything.

And the Christian view is anti-Calvinist.

God is not behind history.

God is not causing everything.

If you say God is causing everything,

you're scapegoating God; God is your scapegoat.

But wait. But God is behind Jesus Christ entering history,

because God was not going to leave us

in stagnation as a decadent Roman Empire, right?

Well, so at some point...

no, no, at some point God is going to step in.

I am not, I am not that Calvinist.

And that’s not Calvinism, though.

That’s just Christianity.

God will not leave us eternally staring into screens

and being lectured by Greta Thunberg, right?

He will not abandon us to that fate.

It is...

There is, I don't know,

for better and for worse,

a great deal of scope for human action,

for human freedom.

If I thought these things were deterministic,

you might as well maybe just accept it.

The lions are coming.

You should just do some yoga and prayerful meditation

and wait while the lions eat you up.

And I don’t think that’s what you’re supposed to do.

No, I agree with that.

And I think on that note, I’m just trying to be hopeful

and suggesting that in trying to resist the Antichrist using

your human freedom, you should have hope that you’ll succeed.

We can agree on that.

Good Peter Thiel, thank you for joining me.

Thank you.
