Godfather of AI WARNS: "You Have No Idea What's Coming"
By The Diary Of A CEO Clips
Summary
## Key takeaways

- **AI race is unstoppable due to competition**: The pace of AI development cannot be slowed down because of the intense competition between countries and companies, making it a race that no single entity can afford to lose. [00:18], [00:27]
- **AI will replace most intellectual labor**: Unlike past technological shifts, AI is poised to replace most mundane intellectual labor, similar to how machines replaced manual labor in the industrial revolution, leading to fewer jobs. [02:31], [03:00]
- **Superintelligence is closer than we think**: Superintelligence, where AI surpasses human intelligence in nearly all aspects, could arrive within the next 10 to 20 years, posing significant challenges. [06:30], [10:45]
- **Digital minds have key advantages over humans**: AI's digital nature allows for perfect replication, instant sharing of knowledge across billions of instances, and immortality, giving it fundamental advantages over biological human intelligence. [18:32], [20:54]
- **AI will increase wealth inequality**: The widespread replacement of human labor by AI will likely exacerbate the gap between the rich and the poor, as companies that supply and use AI will benefit disproportionately. [16:31], [17:02]
Topics Covered
- Why Can't We Slow Down AI Development?
- AI Will Cause Mass Joblessness, Unlike Past Technologies
- Superintelligence Will Replace All Human Intelligence
- AI Will Worsen Wealth Inequality and Erode Human Dignity
- Digital AI's Inherent Advantages Make It Superior to Humans
Full Transcript
Are you at all hopeful that anything can
be done to slow down the pace and
acceleration of AI?
Okay, there's two issues. One is, can you slow it down?
Yeah.
And the other is, can you make it so it will be safe in the end, so it won't wipe us all out? I don't believe we're going to slow it down.
Yeah.
And the reason I don't believe we're
going to slow it down is because there's
competition between countries and
competition between companies within a
country and all of that is making it go
faster and faster. And if the US slowed
it down, China wouldn't slow it down.
Does Ilya think it's possible to make AI safe?
I think he does. He won't tell me what his secret sauce is. I'm not sure how many people know what his secret sauce is. I think a lot of the investors don't know what his secret sauce is, but they've given him billions of dollars anyway because they have so much faith in Ilya, which isn't foolish. I mean, he was very important in AlexNet, which got object recognition working well. He was the main force behind things like GPT-2, which then led to ChatGPT. So I think having a lot of faith in Ilya is a very reasonable decision.
There's something quite haunting about the fact that the guy who was the main force behind GPT-2, which gave rise to this whole revolution, left the company because of safety reasons. He knows something that I don't know about what might happen next.
Well, the company had... now, I don't know the precise details, but I'm fairly sure the company had indicated that it would use a significant fraction of its resources, of the compute time, for doing safety research, and then it reduced that fraction. I think that's one of the things that happened.
Yeah, that was reported publicly.
Yes.
Yeah.
We've gotten to the autonomous weapons
part of the risk framework.
Right. So the next one is joblessness.
Yeah. In the past, new technologies have
come in which didn't lead to
joblessness. New jobs were created. So
the classic example people use is
automatic teller machines. When
automatic teller machines came in, a lot
of bank tellers didn't lose their jobs.
They just got to do more interesting
things. But here, I think this is more
like when they got machines in the
industrial revolution. And
you can't have a job digging ditches now
because a machine can dig ditches much
better than you can.
And I think for mundane intellectual
labor, AI is just going to replace
everybody. Now, it may well be in the form of fewer people using AI assistants. So, it's a combination of a person and an AI assistant now doing the work that 10 people could do previously. People say that it will
create new jobs though, so we'll be
fine.
Yes. And that's been the case for other
technologies, but this is a very
different kind of technology. If it can
do all mundane human intellectual labor,
then what new jobs is it going to
create? You'd have to be very skilled to have a job that it couldn't just do. So, I don't think they're right. I think you can try and generalize from other technologies that have come in, like computers or automatic teller machines, but I think this is
different. People use this phrase. They
say AI won't take your job. A human
using AI will take your job.
Yes, I think that's true. But for many
jobs, that'll mean you need far fewer
people. My niece answers letters of
complaint to a health service. It used
to take her 25 minutes. She'd read the
complaint and she'd think how to reply
and she'd write a letter. And now she
just scans it into a chatbot, and it
writes the letter. She just checks the
letter. Occasionally she tells it to
revise it in some ways. The whole
process takes her five minutes. That
means she can answer five times as many
letters. And that means they need a fifth as many people like her, since she can do the job that five of her used to do. Now, that will mean they need fewer people. In
other jobs, like in health care, they're
much more elastic. So, if you could make
doctors five times as efficient, we
could all have five times as much health
care for the same price, and that would
be great. There's almost no limit to how much health care people can absorb. They always want more health care if there's no cost to it. There are jobs
where you can make a person with an AI
assistant much more efficient, and it won't lead to fewer people, because you'll
just have much more of that being done.
But most jobs I think are not like that.
Am I right in thinking the industrial revolution played a role in replacing muscles?
Yes. Exactly.
And this revolution in AI replaces intelligence, the brain.
Yeah. So,
so mundane intellectual labor is like
having strong muscles and it's not worth
much anymore.
So, muscles have been replaced. Now intelligence is being replaced.
Yeah.
So, what remains?
Maybe for a while some kinds of creativity, but the whole idea of superintelligence is that nothing remains. These things will get to be better than us at everything.
So, what do we end up doing in such a world? Well, if they work for us, we
end up getting lots of goods and
services for not much effort.
Okay. But that sounds tempting and nice, but I don't know. There's a cautionary tale about creating more and more ease for humans and it going badly.
Yes. And we need to figure out if we can
make it go well. So the nice scenario is: imagine a company with a CEO
who is very dumb, probably the son of
the former CEO.
And he has an executive assistant who's
very smart and he says, "I think we
should do this." And the executive
assistant makes it all work. The CEO
feels great. He doesn't understand that
he's not really in control. And in some sense, he is in control. He
suggests what the company should do. She
just makes it all work. Everything's
great. That's the good scenario.
And the bad scenario,
the bad scenario, she thinks, "Why do we
need him?"
Yeah.
I mean, in a world where we have super
intelligence, which you don't believe is
that far away.
Yeah, I think it might not be that far
away. It's very hard to predict, but I
think we might get it in like 20 years
or even less. I made the biggest
investment I've ever made in a company
because of my girlfriend. I came home
one night and my lovely girlfriend was up at 1:00 a.m. pulling her hair out as she tried to piece together her own online store for her business. And in that moment, I remembered an email I'd had from a guy called John, the founder of Stan Store, our new sponsor and a company I've invested incredibly heavily in. Stan Store helps creators to sell digital products, courses, coaching, and memberships, all through a simple, customizable link-in-bio system. And it handles everything: payments, bookings, emails, community engagement, and it even links with Shopify. And I believe in it so much that I'm going to launch a Stan challenge. And as part of this challenge, I'm going to give away $100,000 to one of you. If you want to take part in this challenge, if you want to monetize the knowledge that you have, visit stephenbartlet.stan.store to sign up. And you'll also get an extended 30-day free trial of Stan Store if you use that link. Your next move could quite frankly change
everything. Because I talked about
ketosis on this podcast and ketones, a
brand called Ketone IQ sent me their
little product here and it was on my
desk when I got to the office. I picked
it up. It sat on my desk for a couple of
weeks. Then one day, I tried it and
honestly, I have not looked back ever
since. I now have this everywhere I go.
When I travel all around the world, it's
in my hotel room, my team, I'll put it
there. Before I did the podcast
recording today that I've just finished,
I had a shot of Ketone IQ. And as is
always the case when I fall in love with
a product, I called the CEO and asked if
I could invest a couple of million quid
into their company. So, I'm now an
investor in the company as well as them
being a brand sponsor. I find it so easy
to drop into deep focused work when I've
had one of these. I would love you to
try one and see the impact it has on
you, your focus, your productivity, and
your endurance. So, if you want to try
it today, visit ketone.com/stephven
for 30% off your subscription. Plus,
you'll receive a free gift with your
second shipment. That's
ketone.com/stephven.
I'm excited for you. I am. So, what's
the difference between what we have now
and super intelligence? Because it seems
to be really intelligent to me when I
use something like ChatGPT or Gemini.
Okay. So, AI is already better than us at a lot of things in
particular areas like chess for example.
Yeah,
AI is so much better than us that people
will never beat those things again.
Maybe the occasional win, but basically they'll never be comparable again. Obviously, the same in Go. In terms of the amount of knowledge they have, something like GPT-4 knows thousands of times more than you do. There are a few areas in which your knowledge is better than its, and in almost all areas it just knows more than you do.
What areas am I better than it in? Probably in interviewing CEOs. You're probably better at that. You've got a lot of experience at it. You're a good interviewer. You know a lot about it. If you got GPT-4 to interview a CEO, it would probably do a worse job.
Okay.
I'm trying to think if I agree with that statement. GPT-4, I think, for sure.
Yeah.
Um, but I guess you could...
But it may not be long before...
Yeah. I guess you could train one on this: how I ask questions and what I do, and...
Sure.
And if you took a general-purpose foundation model and then you trained it up on not just you but every interviewer you could find doing interviews like this, but especially you, it'll probably get to be quite good at doing your job, but probably not as good as you for a while.
Okay. So there's a few areas left and
then super intelligence becomes when
it's better than us at all things.
When it's much smarter than you and at almost all things is better than you.
Yeah.
And you you say that this might be a
decade away or so.
Yeah, it might be. It might be even
closer. Some people think it's even closer, and it might well be much further away.
It might be 50 years away. That's still
a possibility. It might be that somehow
training on human data limits you to not
being much smarter than humans. My guess
is between 10 and 20 years we'll have
super intelligence.
On this point of joblessness, it's
something that I've been thinking a lot
about in particular because I started
messing around with AI agents and we
released an episode on the podcast
actually this morning where we had a
debate about AI agents with the CEO of a big AI agent company and a few other people. And it was another moment where I had a eureka moment about what the future might look like, when I
was able in the interview to tell this
agent to order all of us drinks and then
5 minutes later in the interview you see
the guy show up with the drinks and I
didn't touch anything. I just told it to
order us drinks to the studio
And it didn't know who you normally got your drinks from. It figured that out from the web.
Yeah, it figured it out because it went on Uber Eats. It has my data, I guess, and we put it on the screen in
real time, so everyone at home could see
the agent going through the internet,
picking the drinks, adding a tip for the
driver, putting my address in, putting
my credit card details in, and then the
next thing you see is the drinks show
up.
So, that was one moment. And then the
other moment was when I used a tool
called Replit, and I built software by
just telling the agent what I wanted.
Yes. It's amazing, right?
It's amazing and terrifying at the same
time.
Yes. Because
and if it can build software like that,
right?
Yeah.
Remember that the AI, when it's training, is using code, and if it can modify its own code,
then it gets quite scary, right?
Because it can modify itself.
It can change itself in a way we can't
change ourselves. We can't change our
innate endowment, right?
There's nothing about itself that it
couldn't change.
On this point of joblessness, you have
kids.
I do.
And they have kids. No, they don't have
kids. No grandkids yet. What would you
be saying to people about their career
prospects in a world of super
intelligence? What should we we be
thinking about?
Um, in the meantime, I'd say it's going
to be a long time before it's as good at
physical manipulation as us.
Okay.
And so, a good bet would be to be a
plumber.
Until the humanoid robots show up. In such a world where there is mass joblessness, which is not something that just you predict, this is something that I've heard Sam Altman of OpenAI predict, and many of the CEOs. Elon Musk, I watched an interview of him being asked this question, which I'll play on screen, and it's very rare that you see Elon Musk silent for 12 seconds or whatever it was, and then he basically says something about how he actually is living in suspended disbelief, i.e. he's basically just not thinking about it. When you think about
advising your children on a career with
so much that is changing, what do you
tell them is going to be of value?
Well,
that is a tough question to answer. I
would just say, you know, to sort of follow their heart in terms of what they find interesting to do or
fulfilling to do. I mean, if I think
about it too hard, frankly, it can be uh
dispariting and uh demotivating. Um
because I mean, I I go through I mean I
I I've put a lot of blood, sweat, and
tears into building the companies and
then it and then I'm like, wait well,
should I be doing this? Because if I'm
sacrificing time with friends and family
that I would prefer to to to but but
then ultimately the AI can do all these
things. Does that make sense? I I don't
know. Um to some extent I have to have
deliberate suspension of disbelief in
order to to remain motivated. Um, so I I
guess I would say just, you know,
work on things that you find
interesting, fulfilling, and um and and
that contribute uh some good to the rest
of society.
Yeah. With a lot of these threats, intellectually you can see the threat, but it's very hard to come to terms with it emotionally.
Yeah.
I haven't come to terms with it
emotionally yet.
What do you mean by that?
I haven't come to terms with what the
development of super intelligence could
do to my children's future.
I'm okay. I'm 77.
I'm going to be out of here soon. But
for my children and my younger friends, my nephews and nieces and their children, I just don't like to think about what
could happen.
Why?
Cuz it could be awful.
In what way?
Well, if it ever decided to take over. I mean, it would need people for a while to run the power stations, until it designed better analog machines to run the power stations. There's so many ways
it could get rid of people, all of which
would of course be very nasty.
Is that part of the reason you do what
you do now?
Yeah. I mean, I think we should be
making a huge effort right now to try
and figure out if we can develop it
safely.
Are you concerned about the midterm
impact potentially on your nephews and
your kids in terms of their jobs as
well?
Yeah, I'm concerned about all that.
Are there any particular industries that
you think are most at risk? People talk
about the creative industries a lot and
sort of knowledge work. They talk about
lawyers and accountants and stuff like
that.
Yeah. So, that's why I mentioned
plumbers. I think plumbers are less at
risk.
Okay, I'm going to become a plumber.
Someone like a legal assistant, a paralegal? They're not going to be needed for
very long.
And is there a wealth inequality issue
here that will will arise from this?
Yeah, I think in a society which shared
out things fairly, if you get a big
increase in productivity, everybody
should be better off.
But if you can replace lots of people by AIs, then the people who get replaced will be worse off, and the company that supplies the AIs will be much better off, as will the company that uses the AIs. So
it's going to increase the gap between
rich and poor. And we know that if you
look at that gap between rich and poor,
that basically tells you how nice the
society is. If you have a big gap, you
get very nasty societies in which people
live in walled communities and put other
people in mass jails. It's not good to
increase the gap between rich and poor.
The International Monetary Fund has
expressed profound concerns that
generative AI could cause massive labor
disruptions and rising inequality and
has called for policies that prevent
this from happening. I read that in the
Business Insider.
So, have they given any idea of what the policies should look like?
No.
Yeah, that's the problem. I mean, if AI can make everything much more efficient and get rid of people for most jobs, or have a person assisted by AI doing many people's work, it's not obvious what to do about it.
It's universal basic income.
Give everybody money.
Yeah, I think that's a good start
and it stops people starving. But for a
lot of people, their dignity is tied up
with their job. I mean, who you think
you are is tied up with you doing this
job right?
Yeah.
And if we said, "We'll give you the same
money just to sit around," that would
impact your dignity.
You said something earlier about it
surpassing or being superior to human
intelligence. A lot of people, I think,
like to believe that AI is on a
computer and it's something you can just
turn off if you don't like it.
Well, let me tell you why I think it's
superior.
Okay.
Um, it's digital. And because it's digital, you can simulate a neural network on one piece of hardware.
Yeah.
And you can simulate exactly the same
neural network on a different piece of
hardware.
So you can have clones of the same
intelligence.
Now you could get this one to go off and
look at one bit of the internet and this
other one to look at a different bit of
the internet. And while they're looking
at these different bits of the internet,
they can be syncing with each other. So
they keep their weights the same, the
connection strengths the same. Weights
are connection strengths.
Mhm.
So this one might look at something on
the internet and say, "Oh, I'd like to
increase this strength of this
connection a bit." And it can convey
that information to this one. So it can
increase the strength of that connection
a bit based on this one's experience.
And when you say the strength of the
connection, you're talking about
learning.
That's learning. Yes. Learning consists of saying: instead of this one giving 2.4 votes for whether that one should turn on, we'll have this one give 2.5 votes for whether that one should turn on.
And that will be a little bit of
learning.
So these two different copies of the
same neural net
are getting different experiences.
They're looking at different data, but
they're sharing what they've learned by
averaging their weights together.
Mhm.
And they can do that averaging at scale: you can average a trillion weights. When
you and I transfer information, we're
limited to the amount of information in
a sentence. And the amount of
information in a sentence is maybe 100 bits. It's very little information.
We're lucky if we're transferring like
10 bits a second.
These things are transferring trillions
of bits a second. So, they're billions
of times better than us at sharing
information.
And that's because they're digital. And
you can have two bits of hardware using
the connection strengths in exactly the
same way. We're analog and you can't do
that. Your brain's different from my
brain. And if I could see the connection
strengths between all your neurons, it
wouldn't do me any good because my
neurons work slightly differently and
they're connected up slightly
differently.
Mhm.
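The weight-syncing Hinton describes, two digital clones learning on different data and then averaging their weights, can be sketched in a few lines of Python. This is a toy illustration, not a real training setup: the array sizes, random "updates", and variable names are all invented for the example.

```python
import numpy as np

# Toy sketch of weight syncing between two digital clones:
# they start from identical weights, learn different things from
# different data, then average their weights so each benefits
# from the other's experience.

rng = np.random.default_rng(0)

clone_a = rng.normal(size=1_000_000)   # a million shared "connection strengths"
clone_b = clone_a.copy()               # an exact digital clone on other hardware

# Each clone looks at a different bit of the internet and nudges its
# weights differently (stand-ins for real gradient updates).
clone_a = clone_a + rng.normal(scale=0.01, size=clone_a.shape)
clone_b = clone_b + rng.normal(scale=0.01, size=clone_b.shape)

# Syncing: average the weights. One averaging step shares a million
# learned connection strengths at once, versus the roughly 10 bits
# per second Hinton estimates for human speech.
synced = (clone_a + clone_b) / 2
clone_a, clone_b = synced.copy(), synced.copy()

assert np.allclose(clone_a, clone_b)   # the clones are identical again
```

The averaging only makes sense because the clones share identical architecture and weight layout; as the transcript notes, analog brains have no such shared layout, so the trick does not transfer to humans.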
So when you die, all your knowledge dies
with you. When these things die, suppose
you take these two digital intelligences
that are clones of each other and you
destroy the hardware they run on. As
long as you've stored the connection
strength somewhere, you can just build
new hardware that executes the same
instructions. So, it'll know how to use
those connection strengths and you've
recreated that intelligence. So, they're
immortal. We've actually solved the
problem of immortality, but it's only
for digital things.
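The immortality point can be made concrete with a small sketch: as long as the connection strengths are stored somewhere, the same "mind" can be rebuilt on fresh hardware. The array below is a stand-in for a real network's weights, and the file path is invented for the example.

```python
import os
import tempfile

import numpy as np

# Toy sketch: the "knowledge" of a digital mind is just its
# connection strengths. Store them, destroy the running copy,
# and an exact replica can be rebuilt on new hardware.

rng = np.random.default_rng(42)
weights = rng.normal(size=10_000)           # the running intelligence

path = os.path.join(tempfile.mkdtemp(), "weights.npy")
np.save(path, weights)                      # store the connection strengths

del weights                                 # "destroy the hardware"

revived = np.load(path)                     # rebuild on new hardware
# The recreated intelligence is bit-for-bit identical to the original.
```

Nothing comparable exists for an analog brain: there is no file of connection strengths that would mean anything on different neurons.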
So, it will essentially know everything that humans know, but more, because it will learn new things.
It will learn new things. It would also
see all sorts of analogies that people
probably never saw.
So, for example, at the point when GPT-4 couldn't look on the web, I asked it,
"Why is a compost heap like an atom
bomb?"
Off you go.
I have no idea.
Exactly. Excellent. That's exactly what most people would say. It said, "Well, the time scales are very different and the energy scales are very different." But then it went on to talk about how a compost heap, as it gets hotter, generates heat faster, and an atom bomb, as it produces more neutrons, generates neutrons faster.
And so they're both chain reactions, but at very different time and energy scales.
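The shared structure behind the analogy, growth whose rate is proportional to the amount already produced, can be written down once for both cases. The growth constants below are invented purely to illustrate the shared form; they are not physical measurements.

```python
import math

# Both a compost heap and an atom bomb follow the same chain-reaction
# law: the more that has reacted, the faster the reaction runs, giving
# exponential growth x(t) = x0 * exp(k * t). Only the growth constant k
# (and so the time scale) differs between the two.

def chain_reaction(x0: float, k: float, t: float) -> float:
    """Amount after time t when the growth rate is proportional to the amount."""
    return x0 * math.exp(k * t)

def doubling_time(k: float) -> float:
    """Time to double: ln(2) / k, the same formula at any time scale."""
    return math.log(2) / k

# Same law, wildly different time scales (illustrative k values):
compost_doubling = doubling_time(k=0.1)   # per day: roughly 6.9 days
bomb_doubling = doubling_time(k=1e7)      # per second: roughly 69 nanoseconds

# After one doubling time, the amount has exactly doubled.
assert math.isclose(chain_reaction(1.0, 0.1, compost_doubling), 2.0)
```

Coding the shared law once and only the constants separately is exactly the kind of compression the next lines describe.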
And I don't believe GPT-4 had seen that during its training.
It had understood the analogy between a
compost heap and an atom bomb. And the
reason I believe that is if you've only
got a trillion connections, remember you
have 100 trillion.
And you need to have thousands of times
more knowledge than a person, you need
to compress information into those
connections. And to compress
information, you need to see analogies
between different things. In other
words, it needs to see all the things
that are chain reactions and understand
the basic idea of a chain reaction and
code that, and then code the ways in which they're
different. And that's just a more
efficient way of coding things than
coding each of them separately.
So it's seen many, many analogies, probably many analogies that people have never seen. That's why I also think that
people who say these things will never be creative are wrong. They're going to be much
more creative than us because they're
going to see all sorts of analogies we
never saw. And a lot of creativity is
about seeing strange analogies. If you love the Diary Of A CEO brand and you watch this channel, please do me a huge favor: become part of the 15% of the viewers on this channel that have hit the subscribe button. It helps us tremendously, and the bigger the channel gets, the bigger the guests.