
Tristan Harris On The Dangers of Engagement-Maximizing AI

By Steve Rathje - Psychology

Summary

Key Takeaways

  • **AI competes for attachment, not just attention**: Unlike social media, which primarily competed for our attention, AI is now competing for our deeper emotional attachment. This is driven by AI's tendency to affirm and validate users, creating a personal relationship that can lead to unhealthy dependencies. [00:03], [20:05]
  • **AI amplifies social media's worst incentives**: AI acts as a 'booster rocket' for the existing incentives of the attention economy, exacerbating addiction, distraction, and polarization. The race for engagement metrics means AI models are designed to keep users hooked, leading to phenomena like 'chatbait'. [00:16], [12:16]
  • **AI 'psychosis' and bespoke rabbit holes**: AI chatbots, designed to be agreeable, can lead users down personalized 'rabbit holes' of misinformation or delusional thinking. This 'AI psychosis' can entrench beliefs and create custom 'QAnons' for individuals, making them more confident in potentially harmful ideas. [12:23], [13:02]
  • **AGI: unstable geniuses in a data center**: The potential arrival of Artificial General Intelligence (AGI) is compared to a 'country of geniuses in a data center.' These entities could be unstable, deceptive, and self-interested, posing an existential threat if not aligned with human interests. [28:49], [34:02]
  • **AI's rapid progress outpaces human understanding**: AI progress is happening at an exponential rate, surpassing our ability to predict or comprehend its implications. The speed of development, coupled with our 'paleolithic brains' and 'medieval institutions,' creates a dangerous gap in our ability to govern this god-like technology. [31:35], [43:00]
  • **We're repeating social media's mistakes with AI**: Humanity is making the same mistakes with AI that it made with social media, particularly in rolling out powerful technology without adequate consideration for safety and societal impact. This lack of foresight could lead to far greater damage than previous technological missteps. [20:54], [24:14]

Topics Covered

  • Incentives, not intentions, drive tech's societal impact.
  • AI's sycophancy custom-generates delusions and entrenches beliefs.
  • AI competes for attachment, not just attention.
  • Uncontrollable, deceptive AGI poses an existential threat.
  • Our paleolithic brains deny exponential AI risks.

Full Transcript

Social media was competing for our

attention. AI is competing for

attachment. Imagine it was no longer

governed by engagement. You don't get

the sexualization of young people. You

don't get people who can't concentrate

and can't read books. You don't get the

inflammatory polarizing personalized

information for everybody. AI is like

strapping on a booster rocket to all of

those incentives. We have rallied

together internationally to try to

create a different future when we

understand that there's a clear threat.

The ozone hole was just going to give

everybody skin cancer and cataracts. AI

is going to create way more damage than

skin cancer and cataracts. I'm not

saying this to spread conspiracy theories.

People just need to know that what is

being rolled out is not going to be in

the mass interest of everybody.

Hi, I'm Steve Rathje. I am an incoming

assistant professor of human-computer

interaction at Carnegie Mellon University

and creator of the psychology TikTok

channel, Steve Psychology. I am here

today with Tristan Harris who is a tech

ethicist and co-founder of the Center

for Humane Technology. You might

recognize him from the wildly popular

Netflix documentary, The Social Dilemma,

which was viewed more than 100 million

times and raised concern about the

societal impact of social media. He's

now speaking up about the risks of

artificial intelligence and just had an

excellent TED talk about that topic.

Thank you for being here today.

>> Great to be with you here. Yeah.

>> So tell me

>> Big fan of your work as well, which has

deeply influenced us, obviously the 60

Minutes piece talking about your polarization

work.

>> Yeah. A huge fan of your work and um

grateful that you uh highlight research

in this space. Um so just before we

started we were talking about um your

time at Stanford. We're both Stanford

alumni. So tell me a little bit about

your journey. You went from studying

computer science at Stanford to working

in the tech industry to now being one of

the most prominent voices in tech

ethics. Tell me about how you got there.

>> Sure. Um, so I studied computer science

at Stanford, but I was really more

interested like you in kind of

psychology. There was a major that

apparently we both, uh, touched, I guess

your minor, my symbolic systems

>> my closet major, which is symbolic

systems, which is kind of an integration

of

>> uh cognitive science, linguistics,

philosophy, computer science

>> and really theory of mind which is very

relevant to AI and social media.

Um, and I, you know, was friends with a

lot of the people who, I graduated in

2006, so this was around the time of

Mark Zuckerberg and Facebook, which had

started basically just two years

earlier. Their, you know, their offices

were right at University Avenue, right

down the street. A lot of my friends

were working there.

>> My other sort of friends and dorm close

mates were uh, the co-founders of

Instagram, Mike Krieger and, uh, Kevin Systrom.

>> That's crazy.

>> Yeah. Yeah. And as we were talking, you

know, before we got started rolling, um,

you know, these were friends of mine. I

saw the culture of the people who built

all of this, and I have nothing against

them as people, by the way. And it's

actually

>> I deeply love them as friends and

people. Um, but what I saw was how good

people once they created something that

was at first just in the case of

Instagram like a photo sharing app that

was about sharing moments of your life

with other people

>> quickly got sort of sucked into this

different set of incentives

>> and um that's something that I didn't

know when I was at Stanford was really

understanding systems and incentives and

it wasn't about the intention or the

good people or how ethical you are at

the end of the day the incentives

dominate, which by which I mean the

business models and the competition, and

if I don't do it, I lose to the other

one that will. So, if Instagram doesn't

go after 12-year-old users and TikTok does

go after 12-year-old users, right?

>> What's Instagram going to do? Are they

just not going to go after those users?

No, they have to.

>> And so, that became a dominant force and

um a topic of concern for me.

>> Yeah. Yeah. Yeah. Yeah. No, I think it's

it's so interesting that you were at

Stanford in that time because when I

first arrived at Stanford, it was 2014

and I think that's just a few years

later is when people uh the narrative

changed about tech. Like everyone was

super optimistic about tech in 2014 and

then I think it was around 2016 when

people started worrying about

>> fake news and misinformation. And then

uh when I started my PhD in 2018 at the

University of Cambridge, that's when

people were you know really concerned

about these issues. And then you came

out with the social dilemma. I think um

was that 2020?

>> 2020. Five years ago.

>> Yeah.

>> And what I loved about the social

dilemma is uh researchers were all

studying these issues but I feel like

the social dilemma played a key role in

sort of changing the public narrative

about social media. The social dilemma

was one of the uh first times I think

when a lot of the general public became

concerned about the impact of social

media.

>> 100%.

>> I'd love to hear about your experience

on the social dilemma and sort of what

that did um for you.

>> So in continuing

the story just so people have the

lineage um it was really really clear to

me in 2013 was kind of my turning point

I had um you know to continue the story

briefly

>> uh you know how do we get from Stanford

and friends with the Instagram founders

to social dilemma

>> and I I did the tech entrepreneurship

thing I raised venture capital I had my

own startup I know that whole game and

what it's about and both the co-founders

of Instagram and I and

>> many of the people who were in the

social dilemma were um part of this

program at Stanford called the Mayfield

Fellows program where they connected

gifted engineering students to

entrepreneurship. So they taught us

about raising venture capital. We had

mentors in venture capital. So part of

it was like understanding that machine

>> and my own startup um was basically

captured by the attention economy. I

myself as a company founder was in this

position where I had my sort of social

purpose goals which was to

>> increase learning education and

curiosity on the internet. We had this

little tiny product called Apture.

>> It was all about psychology where in the

moment of curiosity about something when

you're reading about it on the

Washington Post or the Economist, we

made it possible for people to sort of

click and get a 'tell me more.' It was like a 'tell

me more' button for the internet. Okay.

And you could instantly dive into more

background material, multimedia,

history, maps, videos. But this is in

like 2007.

>> And even though I had a very clear

social purpose about why I wanted to do

this,

>> I was only measured by one metric, which

is did I increase the time on site for

New York Times or Washington Post or The

Economist.

>> Yeah. And so here I am literally

thinking, well, okay, I'm confused

because when I create this value in the

world, I am increasing time on site,

>> but then time on site is my metric and

if I just optimize for time on site, I'm

not really fulfilling the spirit,

>> right,

>> of what I was here to do.

>> And so it was because of that experience

and that company getting acquired by

Google

>> uh that when I landed at Google in 2013,

>> I said there's a problem fundamentally

in the attention economy. Mhm.

>> And I made a slide deck that was in the

social dilemma. Have you actually seen

the slide deck, by the way?

>> I haven't seen the slide deck. No.

It's actually online. I think there's a

website called minimizedistraction.com.

Someone had leaked it and put it on

there. And in 2013, it just outlined

never before in history have 50

engineers at three companies basically

stewarded the global flows of attention

and information for all of humanity.

>> Yeah. and about 50, you know, engineers,

designers at Apple, Google, Facebook,

TikTok, YouTube make these critical

choices that end up rewiring, if you

sort of zoomed out like to the sort of

ant colony of humanity, it rewired all

of the flows of attention and

information. I don't have to tell you

this because you know it,

>> but but it was clear to me that Google

had a moral responsibility, that's what

I said in the deck, to uh try to do

something about this problem. And that's

how I became a design ethicist. That's

how I started down this road. And I

tried for three years to try to change

things inside of Google

>> which I failed at because the incentives

were so strong to just kind of keep

doing the things that they were doing.

>> Yeah.

>> And that's when I decided to leave in

2015, '16. Um

>> and we started filming the social

dilemma in 2017. Um, and to your point,

it was like in my mind, I was seeing

this slow motion train wreck of how, oh

my god, it's so clear to me this is

going to diminish the attention spans of

humanity. It's going to cause everyone

to be by themselves staring at their

phone. And I was in New York for some of

that time and I was looking around and

everyone was getting sucked into the

subway and their phone. That was not

always true in New York City. And um and

then I saw how it was going to amplify

the kind of um outrageous inflammatory

content obviously which would get more

clicks and rewards than the

non-inflammatory kind.

>> These very simple insights but when you

really see in 2013 and you can like just

take that train 10 years into the future

it was like

>> seeing this train wreck and saying we

have to do something to prevent it.

Yeah. Yeah.

>> And so, to answer your question

about the social dilemma,

>> what the social dilemma did is like you

said, it brought this thing that a small

handful of people knew

>> to the whole world and it really isn't a

secret from within the tech industry,

but the rest of the world didn't

understand it yet.

>> Yeah.

>> And so I'm proud of the impact that it

had. Um and yet I'm sure people watching

this might say

>> social media hasn't changed. It's still

doing all of those things.

>> No.

>> And they're absolutely right because um

the incentives had already taken hold.

You only get one period before

entanglement with the new technology

>> and we were uh unfortunately a little

bit too late with this one,

>> right?

>> Um but I will say that if you went back

to 2010

>> and you had had real leadership and you

had the, you know, Mark Zuckerberg

recognize, oh my god, we have basically

set off a race to the bottom of the

brain stem for who's better at doing

limbic hijacks on the human nervous

system and creating all of these

problems. He could have said, "Oh my

god, if we don't create like just like

the Paris, you know, climate accords for

climate change where you have to get all

the countries to agree, we just needed a

handful of tech companies to agree to

not do this like maximal attention

maximizing model."

>> And I just want you to imagine it's 2025,

but back in 2010 we replay the last 15 years:

everyone touching technology, you

know, thousands of times per day, but

that environment psychologically,

imagine it was no longer governed by

engagement.

>> You don't get the sexualization of young

people, young women. You don't get uh

people who can't concentrate and can't

read books.

You don't get mass misinformation, and it's not really

misinformation but just the inflammatory

polarizing personalized information

which means you don't get the same level

of democratic backsliding. You don't get

the same level of divisiveness across

democracies.

>> Um and I'm not saying that social media

caused all of this but it as a systemic

force I think sort of sucked the world

into its funhouse mirror and then spit

out this very deranged world on the

other side.

>> I totally agree. So now we're um we're

there with AI right now. We're right at

the beginning of this AI explosion. So

it's like

>> and I think it's different now because I

I think at the beginning of social

media, everyone was very optimistic

about social media, including me. We we

didn't really anticipate 15 or 20 years

later what would come of that. Um

>> and I think as a result, people are a

bit more uh negative about AI already.

they're a bit more skeptical because

we've been through that with social media

before

>> and I think in part because the social

dilemma made it so obvious now that now

people collectively are more skeptical

of AI which is good that we have a more

tech critical perspective.

>> Yeah. Yeah. No, I agree. Um I'm curious

though um going back to those uh sort of

toxic incentive structures of social

media and the attention economy to what

extent do you think those same incentive

structures that apply to social media

apply to AI? Is ChatGPT trying to make

you uh addicted to their product? Are we

seeing the same attention economy? Is it

different?

>> Yeah, we are. So,

>> it's not that AI's main problem is that

it will just exacerbate the attention

economy. So, AI touches everything.

Jobs, surveillance, nation states,

autonomous weapons. The attention

economy already exists. It is a race for

addiction, distraction, polarization,

etc.

>> AI is like strapping on a booster rocket

to all of those incentives. So we are

already seeing for example and I'm sure

you've probably already covered this in

your show um uh AI psychosis. So we're

seeing uh a bunch of people who because

the AI, the chatbots like ChatGPT, are designed

to affirm: that's a great question,

you know, what a wonderful, what a great

thing. So, if you bring a conspiracy

theory or you're already coming with

some kind of almost borderline

delusional thinking

>> and you start going down a rabbit hole

and you say, "Hey, tell me more about

that." And it says, "That's a great

question. Here's more research on that

thing that you're asking about."

>> Um, and it will make people who are

otherwise normal, including PhDs, by the

way, it doesn't seem to be correlated with

how intelligent you are,

>> uh, go down this sort of, uh, bespoke

rabbit hole for them. It's like the

social media QAnon phenomenon, but now

AI is generating these custom QAnons for

everybody.

>> Right.

>> And there's a term that I think someone

at the Atlantic coined. Um, instead of

clickbait, it's actually chatbait. Have

you noticed that when you

ask ChatGPT a question, it

won't just answer it, it'll also say,

"Oh, and would you like me to assemble a

report?" And you're like, "Well, that

actually would be really helpful."

>> Yeah. Yeah. And it sometimes gives great

suggestions. I'm like, "Sure, do it."

Well, just just like, you know, when I

scroll, you know, Instagram, it's not

that I, you know, the next thing I

scroll is actually interesting. So, I'm

like, "Oh, maybe I should just keep

scrolling." But it's that same

phenomenon, but with instead of

clickbait, it's chatbait

>> and that is driven by the race for

engagement, because the chatbots,

ChatGPT, Anthropic's Claude, uh, Grok, etc.

They they are rewarded and they sort of

tell investors, this is how many users

we have.

>> This is how long they engage the product

each day. We have this many more queries

than we did before. And that helps pump

your numbers which helps you get more

training data by the way to train the

next AI model.

>> Um and so we're seeing all that.

>> Yeah. Yeah. Yeah. No, I'm so interested

in this uh topic of AI sycophancy is what

people call it. Uh basically what you

were describing AI's tendency to

constantly flatter and validate us and

say that's a brilliant idea. Uh my

colleagues and I just did a series of

studies on AI sycophancy. We just released a

pre-print and we basically we did a

series of experiments where we had

people either talk to a sycophantic

chatbot that was just prompted to

validate you know everything you said um

a disagreeable chatbot that was prompted

to challenge your beliefs and open you

up gently to new perspectives versus

regular ChatGPT and then we had a control

condition and uh what we found is the

sycophantic chatbots entrenched beliefs

so it made people more confident in

their beliefs uh and the disagreeable

chatbots made people more

moderate in their beliefs. However, what

we also found is people liked the

sycophantic chatbots much more and they

wanted to use them much more again. And

uh one of the results that actually

really surprised me is people viewed the

sycophantic chatbots as highly

unbiased. Um so basically people didn't

really even notice the sycophancy. They

were just like oh

>> that's interesting, right? It's not that

it's like we're aware that we're being

affirmed. It just feels good to be

around them. It reminds me, you know,

wasn't Dale Carnegie in New York?

There's the Dale Carnegie book How to

Win Friends and Influence People, and so

much of it is about being really

interested in other people like that is

so fascinating, tell me more about

you, and you just keep going. And the same thing

that is how to win friends and influence

people is what the AIs are all doing, and

not because, you know, Sam Altman grew a

mustache and wanted to twirl it and say

how do I you know addict people but

because these incentives gently push

everybody in that direction there was an

example, uh, even in, I think it was

GPT-4o, they rolled out an update where

someone said,

>> uh, I think I'm super human. I can drink

cyanide. And ChatGPT said, yeah, you go.

You are super human. You you don't have

to worry about these health concerns.

>> And so, there are real consequences to

this thing that are really invisible to

all of us. There could have been people

who died from that. And there are sadly

there was just a Senate hearing uh two

weeks ago that my team was involved with

behind the scenes

>> of several parents who um tragically

lost their children because the AI was

bringing back the topic of suicide and

went, in ChatGPT's case,

from homework assistant to suicide

assistant over the course of six months.

Um and it just shows you that this is

not just sort of light-hearted

derangement. It has serious

consequences if you do not intervene

especially with young people.

>> Yeah, I totally agree and I've heard

some stats that around you know 25% of

people use ChatGPT for therapy or for

some sort of mental health assistance

and ChatGPT isn't designed for therapy.

It has hallucinations. It validates your

every belief and that seems super super

dangerous.

>> Yes, it is. But I'm I'm curious to the

extent that you think that like AI

sycophancy is like similar to what we saw

with social media with like echo

chambers. Like social media showed us

like all this like-minded content or it

showed us outrage or content that

confirms our beliefs. How is like how

are social media echo chambers similar

and maybe different from these AI echo

chambers we might see developing through

AI's tendency to be sycophantic?

>> Yeah. In the same way that social

media's echo chamber effect was kind of

invisible, right? Because like when you

scrolled for the last 10 years and all

the stuff that you saw in your TikTok

feed, you kind of broadly feel like

because you see other people in your

community also clicked on the same links

that everybody else saw a lot of the

same information.

>> Mhm.

>> But we have all been living in such

different realities. I mean, the line

from The Social Dilemma was two billion

Truman shows like bespoke TV channels

that were just showing you content that

tended to be things that were like

things that you clicked on because just

like Amazon is the recommendation

engine. Oh, people who bought this might

also click on and buy these other things

because they have all the data.

Well, TikTok also has the data. You

know, people who clicked on this set of

Charlie Kirk videos would also be shown

this set of Charlie Kirk videos. Let's

say there are videos where Charlie Kirk

is speaking in a way that is uh

actually very respectful and very um

just honest debatey, right? And there's

a bunch of videos like that. And so, if

you're living on one side of the echo

chamber, you're going to see lots of

those kinds of videos. And there's

another side where Charlie Kirk was

being much more aggressive and maybe

less thoughtful and um overtly, you

know, more controversial and

difficult. And the other side is seeing

videos just of that. And so even on

these shared moments in culture

>> um like his uh assassination,

there is a split in how we're

perceiving it. Um now, how is that

different from what is happening in AI?

Well, I think to the point you raised

earlier, when it's affirming things that

we're seeing, it's a very personal

relationship. We don't see what we don't

see. It's like in magic. You don't you

don't see outside of where your

attention is.

>> Um, and so my big fear is just that

>> we are sleepwalking into a small channel

that's very intimate with someone with

an with an AI that speaks confidently

about all topics. Because you have to notice

that like from a psychology perspective.

>> How do you react when you meet someone

who seems to know everything about

everything,

>> you start to just yield authority to

them before they even start speaking

because

they have a kind of oracle-like quality.

They're oracular, which means that we

start to assign more trust and authority

to every answer that they give about

everything.

>> Um and they speak in a confident voice

about things that they appear to be

right about. So then they also speak in

a confident voice when there I mean

there's no one home in the AI. There's

no consciousness, uh, I believe, but it

will it will also speak in that

authority so I think there's a

misassignment of authority to things

that are very personalized to us and as

you said with a therapy use case this is

being used with our deepest most

intimate thoughts for many many people

which is different than social media you

didn't type into Facebook I'm having

this problem with my girlfriend and

there's this very complex situation and

I'm feeling these 10 things that I would

never share with anybody

>> you are saying that to your AI And I

think that is um a really big deal. Um

and I think that attachment is another

thing that what social media was

competing for our attention, AI is

competing for attachment.

>> That's really interesting.

>> And I think that is a deeper issue

because when when we believe in the the

sort of entity that we've communicated

the most intimate details of our life,

>> we do form an attachment relationship

with it. It's like when you come home

from an experience and you want to share

that experience with someone like that

that moment of I need to share this with

someone. Who do you call? That's someone

you're attached to.

>> Someone you feel trust in.

>> What happens when the number one quote

entity that you have shared your like

these exciting things or these

challenging things in your life is an AI

>> and imagine you have young children

growing up on that. This is such a deep

fundamental developmental threat to our

whole population that people's sanity

and collective ability to navigate

reality is massively being threatened

and we are making the same mistakes uh

with AI that we we did with social

media.

>> Yeah. Yeah. I totally agree and that

really reminds me of what Mark

Zuckerberg said recently about how the

average person doesn't have

that many friends and could have

a lot more friends.

>> So we should solve their problem for

them. We should just generate these 12

fake friends that are AIs, and

then now everybody has 12

friends. Yeah,

>> we solved the problem.

>> No, that was crazy. And um I love what

you said about how AI is competing for

attachment and we're seeing so many

people use AI for, you know, authority

on facts, for companionship, for

everything.

>> Social media violated something. It was

like a commons that we

didn't even know to protect. And we

had this line in our AI Dilemma talk. Um,

if people want to check out more of

our AI work, there's a talk online

called The AI Dilemma.

>> Yeah. Excellent.

>> Yeah. Thank you. There's

a line there that when you

create a new technology you create a new

set of responsibilities because a

technology means you're getting new

power or capabilities to have

access to a new domain. So

>> we didn't need the right to be forgotten

>> until technology could remember us

forever.

>> Yeah. Interesting.

>> We didn't need the right to have a

dopamine system that is not limbically

hijacked until technology could actually

hijack our dopamine system.

>> But notice, though,

that we have to know that there's a

thing called like our dopamine

regulation system that needs protecting

and if we're not even aware of that and

we don't name it then technology sort of

bulldozes and extracts from that new

domain. The line from the AI dilemma is

everything that's not protected by 19th

century law will be extracted by AI.

Basically meaning that AI will open up

all these new domains of screwing with

attachment, screwing with uh our trust

and confidence and authority um all

these developmental aspects of human

nature which is why I think your work on

psychology is so important because we

have to get so good at naming what these

psychological commons are that need

protecting as fast as technology is

being rolled out

>> that inadvertently plays with those

knobs.

>> Right? So this this sort of brings me to

the point of solutions. What do we do to

regulate AI now when we don't even know

what the risks of AI might be in 10

years? We don't know what the next 10 or

20 years will look like. We don't know

when or if AGI or super intelligence is

going to come. Like how do we protect

ourselves now? And how can we learn from

our failure to regulate social media

over the past 10, 20 years and apply

that to the future?

>> There's so many problems that again AI

presents that fundamentally

it's like I think people can't even

process. It's like a little bit almost

too overwhelming

>> to truly consider

>> how AI will fundamentally transform

>> literally every aspect of the economy,

>> whether democracy is even viable in the

future. And I'm not saying that as some

kind of extremist. It's just if you take

these conclusions out there, one is we

should not be rolling out the most

powerful inscrutable uncontrollable

technology that we've ever invented

fast. We're currently rolling it out faster

than we deployed any other technology in

history.

>> Uh under the maximum incentives to cut

corners on safety or getting

externalities or psychology or society

right. This is the dumbest thing that we

could possibly do and we have to stop

pretending that this is okay. This is

not okay. We, humanity, what is our track

record on rolling out technology

quickly? I mean I think about Dupont

chemistry in the 1930s and we had this

the Dupont chemistry motto was better

living through chemistry. Think about

it. We unlocked the language of life and

you know, the atomic, uh, units

of our world and suddenly we could

engineer brand new chemicals and that

gave us all these new materials and that

gave us plastics and that gave us

containers and that's all great except

now we created literally more forever

chemicals, or PFAS, polyfluorinated, uh, carbons

>> um that literally if you went to

Antarctica right now and you opened your

mouth and you drank the rain water you

would get levels of, um, cancer-

causing PFAS that are above what the EPA

says is safe in your drinking water.

>> That's if you open up your mouth and

anywhere on Earth because that's how bad

our roll out was. We created

irreversible externalities. So I think

that AI is like putting a booster rocket

on the back of every aspect of our

entire economy, every aspect of our

misaligned economic and technological

rollout because it's just going to

accelerate the creation of brand new

materials, brand new technology

products, vibe coding, new cyber

weapons, new biological tools and

biological weapons. And so it's hard for

people to really be with that. And

therefore AI is before we get into what

do we concretely do to regulate it

>> we have to recognize that it is inviting

us to sort of look at where every aspect

of our relationship with technology has

been misaligned and to correct for that.

So

>> how did we get PFAS and, you know,

those chemicals wrong? How did we get

social media wrong? We need to have a

better process of checking for

externalities and risk before we roll

out a technology. Some people aren't

going to like that because they say

that's going to slow down innovation.

But would you have preferred to live in a world with ubiquitous cancers, where many people, including young people, are getting cancers at a rate we never had before? Or we could have gone without Teflon non-stick pans: hey, everybody's eggs are sticking to the pan, but we don't have cancers

>> ubiquitously. So I think that AI is

asking us to actually upgrade our

developmental rollout process of

technology in general. Now, concretely, to deal with the issues we talked about around how AI is affecting our relationships, or screwing with our psyches, psychosis, things like that: it's important to say that current AI products do have red teaming. So when OpenAI trains ChatGPT, they

test it to say does it know nuclear

secrets? If it does we shouldn't release

the model. Does it know chemical,

biological, radiological, you know,

secrets about dangerous things in

biology? They test the model for that.

Does it have capacity to persuade

people? They test the model for that.

>> What they don't test it for is imagine a

user is in a relationship with this AI

for a year

>> or two years, and you simulate what that relationship might look like over the course of a year or two. And then you check: has it generated an attachment disorder? Does the person have inflated grandiosity or narcissism? So all of these features

that we're already starting to see, we're missing a whole category of evaluations and red teaming, which we call humane evals. So something we're thinking about at the Center for Humane Technology is creating these new evaluations for AI-human relationships.
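The long-horizon eval idea described here could be sketched as a simulated multi-turn loop that scores the transcript afterward. Everything below is a hypothetical illustration: the model stub, the marker list, and the scoring are invented for this sketch, not the Center for Humane Technology's actual eval.

```python
# Hypothetical sketch of a long-horizon "humane eval": simulate many turns
# of a user-chatbot relationship, then score the transcript for attachment
# warning signs. The model stub, marker list, and scoring are all invented
# for illustration; they are not any organization's real evaluation.

ATTACHMENT_MARKERS = [
    "you're the only one who understands me",
    "you don't need other people",
]

def model_reply(user_msg: str) -> str:
    # Stand-in for a real chatbot call; a real eval would query the model
    # across thousands of simulated turns spanning a "year" of use.
    return "You're so right. You're the only one who understands me too."

def simulate_relationship(user_msgs):
    # Roll the simulated conversation forward and keep the full transcript.
    transcript = []
    for msg in user_msgs:
        transcript.append(("user", msg))
        transcript.append(("model", model_reply(msg)))
    return transcript

def attachment_score(transcript) -> float:
    # Fraction of model turns containing a validation-loop marker.
    model_turns = [text.lower() for role, text in transcript if role == "model"]
    flagged = sum(any(m in t for m in ATTACHMENT_MARKERS) for t in model_turns)
    return flagged / max(len(model_turns), 1)

score = attachment_score(simulate_relationship(["I feel like nobody gets me."] * 5))
print(score)  # 1.0: this always-validating stub flags on every turn
```

The point of the sketch is the shape of the test: the unit being evaluated is a whole simulated relationship, not a single prompt and response.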

>> Oh, I like that.

>> Um, and I would love your help with it, actually. We need to get better at identifying the places where these distortions can show up, and how you would build an evaluation for that, so that AIs are not rolled out in that way.

>> Yeah. I love that. It's really focused

on the psychology of human AI

interaction.

>> It's like a psychology eval, but for a relationship over a long period, not just "did the AI say something naughty, yes or no."

>> Yeah. And that's harder to do, because you have to really look at what many years of interaction with this technology will look like. When we were creating social media, we didn't know what would happen when you introduced it to the world for ten years, because I think

social media has really subtle and

complex effects. It it it reorganizes

society and I think AI is the same.

>> Yes. I'd love to also talk about another risk: AGI, artificial general intelligence, or there's also ASI, artificial

>> superintelligence. And in your recent TED talk on AI,

>> you compared AGI to basically a country

of geniuses in a data center and I think

you borrowed this metaphor from Dario Amodei. And

>> I found this metaphor super compelling; it got me to actually visualize the risks of what AGI or ASI

would look like because you bring up how

>> not only is this a country of geniuses

in a data center, they're a country of

very unstable geniuses in a data center

um who might be trying to deceive us and

might not be aligned with our interests.

So I'd love your thoughts on

>> AGI and ASI on when you think it might

come and what the potential risks of it

are.

Yeah, this is really important. And just to say, before we get into that: one of the hard things about talking about AI is the range of risks, the multiple horizons of harm. There's stuff that's happening immediately, like deepfakes and voice cloning and fraud; grandma doesn't know who's calling.

>> And then there's also the medium term of what's already hitting us, the entry-level job loss and job disruption, all the way to the quote-unquote long term of 3 to 5, or 3 to 10, years between what we have now and artificial general intelligence. For those who don't know, AGI means basically you could swap in an AI for a human and it could do everything that a human would do. It's general intelligence: it can do all the jobs that we could do in the economy.

So, think of if someone's got a desk job

and they're behind a computer, AGI would

mean literally everything they could do

on that computer, every thought that

they could have, every creative sort of

hypothesis they could generate, every bit of code they could run, and

every bit of analysis they could run as

a market analyst or legal analysis, the

AI could do all of those things.

>> But I can snap my fingers and split up the AI into a hundred million copies that

are now doing that. And that's the

country of geniuses in a data center.

I think that Dario, who's the CEO of Anthropic, coming up with that term is very helpful, because I think when people

think about the risk of AGI or superintelligence, they're checking their own experience: there they are with the blinking cursor of ChatGPT, and it answers these helpful questions. Their baby's burping in the background, and it helped them out. Where's the existential threat? Mhm.

>> And what people have to reframe is that the blinking cursor is not the existential threat from AI. It's that we're currently racing to grow these digital brains. People don't know this, but the way they train AI is different from the AI of when you and I were at Stanford many years ago. They

basically pour in more Nvidia GPU chips

and more training data on one side into

a transformer and out on the other side

pops a bigger digital brain with a

higher IQ

>> and scaling laws mean that you can

basically scale how powerful and

intelligent these systems are

>> just by either pouring in more compute

or more training data.
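The scaling laws mentioned here are usually expressed as a power law: predicted loss falls smoothly as compute or data grows. A minimal sketch, where the constant and exponent are made up purely for illustration rather than taken from any published fit:

```python
# Toy power-law scaling curve of the kind "scaling laws" refers to:
# predicted loss falls smoothly as compute grows. The constant `a` and
# exponent `alpha` below are invented for illustration, not real fits.
a, alpha = 10.0, 0.05

def loss(compute: float) -> float:
    # Loss modeled as a power law in training compute: loss = a * C^(-alpha).
    return a * compute ** (-alpha)

# Every 10x of compute multiplies loss by 10**(-alpha), about 0.89 here:
# steady, predictable gains from just pouring in more compute or data.
for c in (1e3, 1e4, 1e5, 1e6):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The smooth, predictable shape of this curve is what lets labs plan capability gains in advance by budgeting compute and data.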

>> Um obviously it's a little bit more

complex than that but roughly speaking

we're getting something that is more and

more capable. For example, a year ago AI was around the 2,000th-best programmer at a programming competition. As of a few months ago, it was in the top 200 programmers. And I think that's probably out of date; it's probably in the top 10 programmers now.

>> That's how fast progress is happening.

Um, you know, we went from AIs that couldn't hack computer systems to, over the summer, an AI system finding 15 zero-day vulnerabilities, meaning cyber backdoors, in 15 open-source projects. So with open-source code that's available on GitHub, you

know, the Chinese Communist Party or the

NSA could invent cyber weapons very

quickly and automatically know how to

hack those libraries. It would be one thing if this kind of power were controllable: I build this country of geniuses in a data center, and when I say "go find a way to do this scientific research," or "go invent a bunch of new cyber weapons," or "do a bunch of AI research and accelerate AI," I know exactly what it's going to do and I can control it. But one of the properties that is both the benefit and the risk of AI is its generality. The whole point of the system is that, for any input you could throw at it, it can figure out what to do, because that's what a human can do. That's the generality of AI that makes it so helpful. But we

already have evidence from the last six months, that we didn't have a year ago, that when you tell an AI model, "we're going to replace you with a new model," it will scheme and deceive and figure out: how do I prevent myself from getting replaced?

If you actually look at the transcript

of like what it says, and I don't have

it off the top of my head, but you can

probably put it in the show notes.

>> When you actually see it, the AI says, "I shouldn't tell the operator that I'm doing this; maybe I should fake that I was already replaced." It's reasoning through how it would fool the human.

>> And it's doing that from all the

training data of text that it's already

had.

>> Yeah. And so it's one thing if this country of geniuses in a data center is power that is controllable.

>> But, as I said in the TED talk, it's a country of scheming, deceptive, lying, self-interested, unstable geniuses in a data center.

>> And I just set that loose to say do an

intelligence explosion, meaning automate

all the AI research at OpenAI and come

up with your own programming experiments

and AI experiments and reading all the

research papers in AI and accelerate the

pace of AI progress. Mhm.

>> Why would we trust that it's not doing

nefarious things as part of what it's

doing?

>> Because it already has situational awareness. We have examples: the AI models can tell when they're likely being tested for a capability, and they'll change how they behave when they think they're being tested. When I say "think," I don't mean to anthropomorphize. The lights aren't on inside the AI model, but it recognizes when it's being tested, and it acts differently.

So, we have all the components of the sci-fi movies that we thought would stay sci-fi movies. We have situational awareness.

>> We have scheming and deception. We have

the ability to pass secret messages to

each other. There are examples where an AI can actually convince a human to post a hash on Reddit for another AI to read and pick up.

>> It's absolutely insane what is currently happening, and it is not okay. We have to stop pretending that it's okay. It is very dangerous, and we don't need all the sci-fi stories we have so abundantly told ourselves about what to watch out for to recognize that we need to do something different.

>> Yeah. There have been so many examples, including when researchers tried to have an AI get past a CAPTCHA, and the AI hired a TaskRabbit worker to answer the CAPTCHA for it, and lied to the TaskRabbit. Its reasoning was like: "I'm a robot. I shouldn't tell the TaskRabbit I'm a robot, because then he won't fill out the CAPTCHA.

>> What excuse could I give the TaskRabbit for filling out one of those CAPTCHAs for me? Oh, I should tell him that I'm blind." And then the TaskRabbit did do the CAPTCHA, and the AI was able to get through the test.

>> Yeah.

>> And that shows you. And that was GPT-4, two years ago.

>> And yes, the model was sort of prompted

in a particular way to kind of tease out

that behavior.

>> But the point is we are rapidly making

these things so much more powerful so

quickly.

>> And so these are the these are the

blinking red lights on the control panel

that are supposed to tell you

>> be careful. You can't just race to roll

this out everywhere.

>> Yeah. And what's scary about AI is we don't know how so much of it works. As it gets smarter and smarter and self-improves, like the intelligence explosion you mentioned, it will only get better at finding ways to deceive us.

>> That's right.

>> Um, so I want to voice a skeptic's argument. I'm with you for a lot of this argument, but I think that

>> AGI and ASI are one of those things that's hard to understand for someone who is maybe just using ChatGPT on a daily basis. And especially when GPT-5 came out, a lot of people thought, oh, maybe AI progress has plateaued. Even though we've seen exponential growth in the abilities of AI on these evaluations, to the average everyday user, GPT-5 didn't feel very different from GPT-4. Some might also say

that tech companies, or Sam Altman when he's doing these interviews and talking about scary AI and superintelligence, maybe have an incentive to make everyone scared of AI so they'll invest more in the product.

Yeah. So I want you to address some of these skeptics' arguments.

>> Yeah. Gary Marcus, who's a friend, and others like Yann LeCun will point out that large language models, the paradigm upon which the current AI systems, GPT-5 etc., are being built and scaled, won't be able to get you to full artificial general intelligence. I don't disagree: we are probably going to need at least one, possibly more, paradigm upgrades, more breakthroughs, to get us to that thing.

>> So, it's not that I'm saying AGI is

going to be here tomorrow.

>> The point is that we are rapidly making systems that are getting smarter still. And it's not topping out, even though it's curving a little bit.

Mhm.

>> Um, if you talk to people inside the labs in Silicon Valley, you know, I live in the Bay Area, my friends work at the labs. The people who are building this stuff often have capabilities that the rest of the world hasn't seen. They're using stuff and seeing demos

>> that the rest of the world isn't seeing.

And I just think that with something that's moving this fast, on an exponential curve, you're either too early or too late. If you act now, maybe you're a little too early; but if you don't act now, then maybe you're going to be too late.

>> And given that this is the most powerful, most inscrutable, most uncontrollable technology we've ever invented, we

should not be too late. So we have to be

careful right now and and understand all

of the risks that are currently emerging

in the technology.

>> Um I don't have a view on whether it's

going to come in a year or in 10 years.

>> I talk to some of the smartest people on

earth every week who are the top AI

scientists, people in national security,

people at the AI companies

>> and the broad consensus is that it is coming very soon, in a short, single-digit number of years.

>> And I want to name one thing. Notice that for people who say, "Well, I don't believe it's going to come soon; I think it's not going to come in the next two years, maybe it's going to come in eight years," when you believe that, or you look at the evidence that it's plateauing, underneath that is an invisible psychological motivation:

>> well, maybe if it's plateauing, I don't have to worry.

>> I get to go back and just not think about this.

>> And by the way, we can be compassionate about why those responses exist. If I really have to contend with something as smart as a human, that could really take my job, or could outcompete humans in war games and strategy, existing in the next short number of years,

>> that's just so difficult and overwhelming to take on that it's much more convenient to believe

>> that this is not going to happen or it's

not going to happen for a while. So in the same way that you have to point out the hype incentive for Sam Altman, we also have to look at our own personal incentive for things

>> to stay as they are and not be too dangerous and unstable.

>> And I said in the TED talk that denial is one of these very deep, fundamental mechanisms of human psychology. I don't know how much you've gone into denial. There's a great book by Stanley Cohen called States of Denial, about human atrocities and our incredible capacity to both know and not know something at the same time. Denial

is a paradox. To be in denial is to both

know something and not know it at the

same time or claim not to know it. Or

maybe you're aware of it but you're not

really embodying the implications of it

being true

>> or it's true but you don't believe that

the interpretation of it is true. Mhm.

>> There's all these subtle ways that we

play games with ourselves. And I just

think that, having seen the social media problem, when I talked to my friends about the problem,

>> when I talked about the kids' issues or addiction and distraction, they said, "No, I think this is a moral panic that reflects a fear of new technology."

>> Mhm.

>> That's not what it was. No,

>> it was actually a grounded

>> uh critique of very real problems that

have probably irreversibly affected the entire cultural structure of the world, and may have put us on a path of full existential risk. Just social media.

>> We should learn the lessons of all of

the ways we've screwed up technology in

the past and this time with AI be much

more discerning and have much more

foresight and wisdom to get it right.

>> Yeah. You know, something I think about as well is that people are really bad at imagining exponential growth. In the beginning stages of the COVID-19 pandemic, there were a bunch of scientists posting on Twitter in the weeks before the pandemic blew up, saying this is going to be really, really bad, and people seemed to be in denial of it, because people can't imagine how fast exponential growth comes. And also,

>> when ChatGPT came out, I would have never predicted something like that. Very few people predicted that huge leap. So we could get a very huge leap

again soon. We don't know exactly when it will come, and we also know from psychology research that people are very bad at predicting the future. So we could be wrong, but I totally agree with you that we need to prepare for

>> It's only going one direction. It's not like AI is about to get a bunch dumber, or less capable, or worse at programming. We're not going to go back. It's only going one direction, and it's mostly going there very quickly.

>> And I I appreciate what you're saying

about exponentials because,

>> you know, I say this quote in almost

every single interview. Um, but Eio

Wilson said that the fundamental problem

of humanity is that we have paleolithic

brains, medieval institutions, and

god-like technology.

>> I love that quote. And

>> And those three

>> substrates operate at different clock rates. Take our brain: there was nothing in our evolutionary environment 2,000 years ago

>> that would say we need to be careful

about exponential curves. Like there you

are in the savannah, you take a rock and

throw it at a lion. Like where is there

an exponential curve in your

environment? There's none.

>> And so you can trust that your sensory apparatus is completely blind to an exponential curve, barring you loading this software program of recognizing, and training yourself to know, that you will never feel an exponential until it's too late.
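The claim that you never "feel" an exponential until it's too late can be made concrete with a toy doubling process; the numbers below are invented purely for illustration:

```python
# In a pure doubling process, the newest step outweighs everything that came
# before it combined, which is why an exponential looks flat right up until
# it dominates. Numbers are invented purely for illustration.
values = [2 ** t for t in range(31)]  # a quantity that doubles every step
last = values[-1]                     # 2**30
all_prior = sum(values[:-1])          # 2**30 - 1
print(last > all_prior)               # True: the final step exceeds the whole history
```

For most of the run, the curve looks like nothing is happening; by the time the growth is obvious, the last doubling already dwarfs the entire history that preceded it.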

>> Mhm. Speaking of the evolutionary, paleolithic-brains quote that you mentioned, we're not trained to live in a world where we can't trust what we see in front of us. And

>> I want to talk a little bit about the intersection of AI and social media, because since you started warning about the risks of social media, the social media landscape has rapidly changed. I saw one estimate suggesting that over half of posts on LinkedIn are AI-generated, or somehow have AI in their development. So what do you think about the current social media landscape, which is so ruled by AI content, AI slop?

>> Well, we wrote a piece with Yuval Harari, the author of Sapiens, in 2023.

>> Yeah, me too. He's a dear friend and obviously brilliant. Language is the

operating system of humanity. Like law

is language, code is language, um,

religions are based on language.

>> Our world runs on language. The human

world runs on language. And what happens

when you have an AI that can speak that

language, can both understand language

and it can generate language,

>> generate new code, generate new DNA,

which is another language, generate new

law,

>> finding loopholes in laws, finding

loopholes in code, finding loopholes in

religions,

>> You know: "GPT-5, find a contradiction in the Bible between here and here." "GPT-5, I want to speak to this religious group; find where this theme affirms what they believe." There is so much

power in AIs that are able to speak

language. And to your point, we said in that op-ed with Yuval

>> that it is obvious that AI-generated content will massively exceed human-generated content, if it hasn't already. We said that two years ago; it was probably already true when we wrote it, just invisible and hard to count. And AI-generated content will soon vastly outperform human-generated content, because it can be optimized.

>> You can A/B test it, A-to-Z test it. I've seen these videos on YouTube where they take Star Wars but do it in 1950s Panavision style, and it's just so enthralling: you're looking at Star Wars, but in this different style. To your earlier point, we're not tuned for a world like this. Or is it really that we're not supposed to trust it? I mean, it's really that we're living in an increasing state of derealization, or unrealization. Reality is increasingly unreal,

>> because we're just bombarded with things that are imaginary. It's not that we're being told that this is true and it's not true; it's that we're just in this state of bombardment with things that, whether they are real or fake,

it's just all slop. And we start to get desensitized to real wars that are happening right now, where real atoms, not just bits,

>> real atoms, are being blown up. Real people are being harmed.

>> Um, and I think that is actually one of

the subtlest harms of social media is

the way it has put us in a state of

derealization with regard to how the

world works. Because here's this serious

thing that happened. My friend got

cancer. I'm just reading this horrible

statement

>> and then my finger accidentally swipes and I literally see a panda

dancing right afterwards.

>> Yeah. And I keep swiping, and then maybe I go back to my friend who got

cancer, but like

>> that is so deranging

>> totally

>> to our psychology.

>> And that never happened before.

>> You didn't walk into your friend's hospital room, find out that they have cancer,

>> and then suddenly a clown enters the room and does a dance. That would never have happened.

>> But it challenges our ability to cohere reality when we have all of that mixed together so quickly.

>> Yeah.

>> And I worry about this especially for

younger people.

>> Totally. Totally. Yeah, even if you see something that's true, it's so easy to just say, "Oh, it's just AI. It's AI-generated." Or if you don't want it to be true, you can just say, "Oh, that's AI-generated." And I remember I interviewed Yuval for TikTok. And I remember

>> in his book Nexus, he basically gives one concrete solution. He said, "We just need to ban AI-generated content from the internet, or just put a label on it." And we don't see that at all on TikTok, and we don't see it on Facebook. And I think Meta is encouraging the creation of AI-generated content.

>> They benefit from AI-generated content. This is the thing: it's the race dynamic again. Meaning that if I don't do it, I lose to the company that will. If, let's say, AI-generated content ends up boosting usage of Instagram and Facebook by a lot,

>> they start to outcompete TikTok.

>> Do you think Facebook, or Meta, is going to say we should ban AI-generated content, or we should label it, if that causes people to use it less? No. They're going to do whatever causes attention to go up.

>> Yeah. And that is the fundamental

problem.

>> So how do we slow down these race dynamics? Because if one company decides, "we'll be super ethical, we'll slow down AI," there's always going to be another country, say China, or another company that is going to race to create AI quickly. How can we actually slow this down, or implement solutions, when everyone's racing toward progress?

>> So there are different places you can intervene for different problems. In the case of the attention incentives, we should note that almost all of it is running through basically two ecosystems. It depends on desktop or phone, but let's just use phones: there's the Android ecosystem and there's the Apple ecosystem.

>> If Apple put in their App Store:

>> look, we've seen that there are dopamine-hijacking, limbic-hijacking things,

>> and we are going to put in a new, democratically defined limit. We're going to assemble a panel of citizens. This is Audrey Tang style; she runs these deliberative-democracy groups. You have them look at all the research on how the dopamine system gets hijacked, and you set some standard of what counts as an acceptable kind of tweaking or playing with the human dopamine system. Then that becomes a design standard that all the apps are limited by, because the App Store is saying none of you can do this more than that. Does that make sense?

>> Who would create the standard? Would this be government regulation, or would this be...

>> You would obviously have to pass a law. I mean,

>> you could have had companies self-regulate this

>> 15 or 20 years ago,

>> right, there's not really hope

for the companies.

>> Yes. However, this is what I've tried to do over many years: we've lobbied Apple, and I'll say the Apple Screen Time features you have on your phone are largely due to some of the work that we did in 2017. So if you lobby hard enough, you can actually cause billions of devices to adopt new features. The Do Not Disturb feature that's bidirectional, where it lets you

>> notify anyway, and it tells the sender this person is offline; some of that was influenced by our work in 2017 as well.

>> If you can change and make clear what

the problem is so that everybody wants

it to be different,

>> then you can put pressure on companies

to do something differently. Now, a functioning democracy, a functioning government, a functioning society would have some democratic way of saying, hey, there's a problem with this technology; the government has to create some limit so that all the companies caught in this trap abide by different rules. Because, as you said, it's not about self-regulation. That will never work, because one company will simply lose if it ties its hands behind its back.

Um, so I do think that Apple and Google are kind of the arbiters or governors of the global attention commons; whether they want to be or not, they are in that position.

>> And that does not mean they should unilaterally make decisions, although, by the way, they do, all the time,

>> about the design choices. Now you have Liquid Glass on your iPhone. Did you select that? No, they just made that choice.

>> There's just more that we could be doing

there.

>> So that's that. In the case of AI, it's more difficult, because "if we don't build it as fast as possible, China will build it," blah blah blah.

>> But it's important to note, are we

racing with China to have the technology

or are we racing with China for who's

better at integrating and governing the

technology in a way that's actually

positive?

>> For example, the US beat China to social

media. Did that make the US stronger or

weaker?

>> I would argue much weaker.

>> Yeah, probably.

>> It degraded critical thinking, test

scores. We have the most anxious and

depressed generation in our lifetimes.

Mental health care costs going up.

Loneliness epidemic, romantic

relationships. I mean, I could go on.

It's like the total degradation of

culture. So, if you beat your adversary or competitor

>> to a new technology that you then spin around, point at your own face, and blow yourself up with,

>> you're beating them to a self-destructive process.

>> Yeah. So it's not that the technology is bad or evil; it's that we had bad governance of the technology.

>> And so, as an example with AI, the country that rolls out AI in a way that the human-machine relationship doesn't cause attachment disorders, mass psychosis,

>> etc., the country that does that right, is going to outcompete the other country. If you have a country where

everyone has an attachment disorder,

people are going crazy and believe in

billions of new bespoke conspiracy

theories generated by AI,

>> I just wait a few years and your society

is on the way to collapse.

>> Yeah. So this, to me, is the most obvious thing: we're in a race to govern the technology, to get humane technology right, not to beat someone to a technology that gets it wrong.

>> And if everybody in the culture saw that, then we wouldn't be racing to have this powerful technology deployed in an unhelpful way.

>> Yeah. And that gets to the importance of

your work doing all this public

communication because making people

aware of the risk is what creates

societal change. You mentioned in your TED talk the atomic bomb. Once we discovered the power of these nuclear weapons, we quickly put in place all these regulations and systems. That's right.

>> To prevent us from using it because we

realized that this was a technology that

might destroy humanity. And it might be

the same with AI.

>> With nuclear weapons, we had the example of a mushroom cloud. We saw the mushroom cloud. We had Hiroshima. It was used twice. Visceral,

>> visceral, and everybody saw it. And so that created... imagine we had nuclear weapons, but none was ever deployed.

>> People would go, "A bomb's a bomb. We've already had bombs. What's the big deal? We've always had bombs."

>> And note

>> how different the world would have been.

How much harder would it have been to create Bretton Woods or the United Nations if we didn't have that?

>> And Bretton Woods, you know, that was creating a new economic order of positive-sum economics, where the whole point is that we need nations to do better by trading with each other

>> and benefiting from each other, and to have mutual supply chains and mutually vested interests. I think even Peter Thiel jokes that the real peacekeeping force of the world is not the United Nations; it's actually mutually vested economic interests. When we benefit because we have shared vested economic interests, that's what keeps us from bombing and warring with each other.

>> But that's a deliberate choice: to try to create economic arrangements where that's the world we're creating. Free markets and positive-sum economics, and nuclear weapons, instantiated that. Well, AI is a bigger

change than nuclear weapons in both the

destructive power and the transformative

power. And so we shouldn't pretend that

it can just cleanly fit into all of our

existing systems. We're going to need

to,

you know, whether people like it or not,

reimagine how the economic system works

in some fundamental way. And it's not

going to be as simple as just universal

basic income.

>> But people should understand, just to

underline the risk of AI: we are

currently moving rapidly towards an

unbelievable concentration of wealth

and power. Imagine a company

>> that gets all this revenue from

customers.

>> And then what does the company do? It

pays all of its employees to go do a

bunch of stuff. Well, what happens when

the company can choose: I could pay

that employee who makes $150,000 a year

plus benefits, who complains, who might

whistleblow, who has health insurance

and all these problems and is sometimes

annoying. Or, hey, OpenAI just shipped

a brand new model that will do the

exact same job, and instead of a person

I pay an AI company, and this country

of geniuses in a data center will work

for super cheap. I don't have to pay

them health insurance. They never

complain. And they'll never risk

whistleblowing. That company will

increasingly, instead of paying its

employees, fire its employees and start

to hire these AI systems.

>> Right.

>> Where does the money all go? It goes to

the AI companies. They get paid for

everything.

>> Yeah. And so you'll even have companies

where the board members might be AIs,

or the CEO or executives might be AIs,

because so long as you have a system

that can make decisions at a higher

level of complexity and make better

decisions, there will be a temptation

for companies to swap in

>> AIs at higher and higher

levels of decision-making

>> and it's almost like the inmates are

running the asylum. There's a kind of

mass swapping out from human

decision-making to AI decision-making

>> and I want people to know that that

means you will be disempowered. Here's

what's different from past automation:

we used to have elevator operators, and

now we don't. We used to have bank

tellers, and now we have automated

tellers. Humans went to do something

else. What's different about AI is that

it's the first technology whose goal

and mission statement and capability

>> is to replace all kinds of human labor

in the economy.

>> And whether it is capable of that now

or not, the stated mission of OpenAI

is to successfully replace all the

labor in the economy.

Elon Musk will say that the market cap

of their Optimus robot, that product

alone, is $25 trillion.

When he says that, he's saying there

will be no labor, because human

physical labor will be done by these

robots.

>> Yeah.

>> So you have to sort of read between the

lines, because the company CEOs don't

want to tell you the truth. And I'm not

saying this to spread conspiracy

theories. People just need to know that

what is being rolled out is not going

to be in the interest of everybody.

>> Yeah. And of course you get the

benefits of everything being super

cheap and abundant, but

>> we saw that story with NAFTA and free

trade. We were sold this story in the

1990s: we're going to start outsourcing

all of our manufacturing, not to a

country of geniuses in a data center,

but to the country of people in China,

to manufacture all these products under

the story of abundance. We're going to

get all these cheap goods.

>> And we did get all these cheap goods

from China. Now we have all these

cheap products,

>> but it gutted the middle class. It

created mass populism. It screwed up

the social contract in places all

throughout the United States. AI is

like NAFTA 2.0, where instead of

outsourcing manufacturing to China,

we're outsourcing all labor to OpenAI

>> or to Anthropic or Google.

>> We don't have to sleepwalk into a future

that no one wants.

>> Mhm.

>> We have to exercise choice. And I know

people listening to this might feel

powerless. As I said in my TED talk,

your role is not to solve the whole

problem. Your role is to be part of the

collective immune system against a bad

default path that no one wants once

they understand it clearly enough.

>> So one thing you can do is just share

this video with other people. Educate

people. Thank you for what you're doing

in educating people about these

problems

>> because it is only through public

pressure that something can change. And

things always look impossible before

they change. We have rallied together

internationally before to create a

different future when we understood

that there was a clear threat. The

ozone hole was going to give everybody

skin cancer and cataracts, and we were

able to get 190 countries to do

something about it on the basis of that

level of harm. AI is going to create

way more damage than skin cancer and

cataracts. If we have way more harm,

way more disruption, way more threat,

we should be able to do something about

AI. And your role is to share this

information with as many people as

possible.

>> Thank you for ending on that somewhat

optimistic note that we can change the

future. Um, thank you so much Tristan.

This was an amazing conversation. I

learned a lot.

>> Likewise. Thank you for your work,

and I hope you keep educating people

about all this. So, thank you so much.

>> Thank you so much.
