AI Whistleblower: We Are Being Gaslit By AI Companies, They’re Hiding The Truth! - Karen Hao
By The Diary Of A CEO
Summary
## Key takeaways
- **Companies weaponize existential fear for control**: AI companies deliberately use apocalyptic narratives to justify their dominance over the technology, claiming only they can prevent catastrophe, while the actual risks remain unclear and contested. [01:08:33], [01:09:08]
- **Altman mirrored Musk's language to recruit him**: Sam Altman strategically adopted Elon Musk's exact rhetoric about AI being humanity's greatest existential threat in 2015 to convince Musk to co-found OpenAI, revealing calculated narrative manipulation. [01:05:27], [01:06:47]
- **Data annotation creates industrial dehumanization**: Laid-off professionals, including award-winning directors, are forced into data annotation work that mechanizes their lives, with mothers reporting they screamed at children because they couldn't pause from annotation tasks. [01:44:58], [01:45:56]
- **Executives privately doubted Altman yet he returned**: Both Ilya Sutskever and Greg Brockman convinced the board that Altman was unfit to 'have the finger on the button for AGI,' yet after his firing, pressure forced his reinstatement days later. [01:26:31], [01:27:12]
- **AI concentrates power in the 'haves' and 'have-nots'**: Vulnerable communities host power-hungry data centers competing for freshwater, breathing toxic air from methane turbines, while CEOs gain more free time and richer lives. [01:53:42], [01:54:42]
- **Scale myths mask financial motives**: Leaders predicting AI will automate all jobs profit enormously from this myth, yet their internal decisions show they pick capabilities based on which industries pay most, not on general intelligence. [01:15:01], [01:15:22]
Topics Covered
- AI Companies Train Models On Jobs They Eliminate
- The Broken Career Ladder: AI Is Eliminating The Rungs
- AI Might Be The First Technology To Actually Connect Us
Full Transcript
So much of what's happening today in the AI industry is extremely inhumane.
But this is me playing devil's advocate.
And logically, it could be the case that the civilization that accelerates its research with AI is going to be the superior civilization.
No, it's not. This is a prediction that you're making, right? That you're making, that Zuckerberg's making.
And do you know what the common feature of all of them is? They profit
enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit. So, what do we do
about it?
We need to break up the empires of AI.
You know, I've been covering the tech industry for over 8 years, interviewed over 250 people, including former or current OpenAI employees and executives.
And I can tell you that there are many parallels between the empires of AI and the empires of old, right? First, they lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models.
Second, they exploit an extraordinary amount of labor, which breaks the career ladder, because someone gets laid off and then they work to train the models on the very job that they were just laid off from, which will then perpetuate more layoffs if that model develops that skill. And when they talk about how there are going to be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there. And
then there's the environmental and public health crisis that these companies have created and how they're able to also spend hundreds of millions to try and kill every possible piece of legislation that gets in their way and
will censor researchers that are inconvenient to the empire's agenda. But
what I'm saying is not that these technologies don't have utility. It's
that the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed in a
different way that doesn't have all of these unintended consequences. So let's
talk about all of that.
This is super interesting to me. My team has given me this report to show me how many of you that watch this show subscribe. And some of you have told us that, according to this, you've been unsubscribed from the channel randomly. So, a favor to ask of all of you: please could you check right now if you've hit the subscribe button, if you are a regular viewer of the show and you like what we do here. We're
approaching quite a significant landmark on this show in terms of subscriber numbers. So if there was one simple, free thing that you could do to help us, my team, everyone here, to keep this show free, to keep it improving year over year and week over week, it is just to hit that subscribe button and to double-check if you've hit it. Only thing I'll ever ask of you. Do we have a deal? If
you do it, I'll tell you what I'll do.
I'll make sure every single week, every single month, we fight harder and harder to bring you the guests and conversations that you want to hear. I've stayed true to that promise since the very beginning of The Diary Of A CEO, and I will not let you down.
Please help us. Really appreciate it.
Let's get on with the show.
Karen, you've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is: what is the research and the journey you went on in order to write this book we're going to talk about, and the subjects within it, today?
I took a strange route into journalism. I studied mechanical engineering at MIT, and so when I graduated, I moved to San Francisco, I joined a tech startup, I became part of Silicon Valley. And I basically received an education in what
Silicon Valley is about, because a few months into joining a very mission-driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable. And this was, in hindsight, a very pivotal moment for me, because I thought: if this hub is ultimately geared towards building profitable technologies, and many of the problems in the world that I think need to be solved are not profitable problems, like climate change, then what are we actually doing here? Like, how did we get to a point where innovation is not necessarily working in the public benefit, and is sometimes even undermining the public benefit in pursuit of profit? In that moment, I had a bit of a crisis where I thought, well, I just spent four years trying to set myself up for this career that I now don't think I am cut out for. And I
thought, well, I might as well just try something totally different. I've always
liked writing, and that's how, after two years, I landed a role at MIT Technology Review, covering AI full-time. And that gave me a space to then explore
all of these questions of who gets to decide what technologies we build how does money and ideology also drive the production of those technologies and how do we ultimately make sure that we
actually reimagine the innovation ecosystem to work for a broad base of people all around the world. And so that is kind of how I then set off on this
journey of ultimately writing a book. I
didn't realize that I was working towards writing a book, but starting in 2018, when I took that job, was essentially the moment in which I began researching the story that I document
in it.
A very timely moment to start working in artificial intelligence. For anyone that doesn't know, this is before the OpenAI ChatGPT launch moment that shook the world.
But in writing this book, you interviewed a lot of people and went to a lot of places. Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, etc.?
I interviewed over 250 people, over 300 interviews. Over 90 of those people were former or current OpenAI employees and executives. So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today. But I didn't want to write a corporate book. I felt very
strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley. These
companies tell us that AI is going to benefit everyone, and that's their mission. But you really start to see that rhetoric break down when you go to the places that look nothing like Silicon Valley, that speak nothing like Silicon Valley, and that have a history
and culture that are fundamentally different as well. And that's where you start to really understand the true reality of how this industry is unfolding around us.
Karen, I often try and steer conversations, but in this situation, I feel like it's probably my responsibility to follow. So with that in mind, I'm going to ask you where does this journey begin and where should we
be starting if we're talking about the subjects of Empire of AI, and AI generally, artificial intelligence. And also, I'd say, one thing I'm really keen to do in this conversation, which I often see left out of conversations, is: let's assume that our viewers know nothing about AI.
Yeah. So they don't know what scaling laws are, or GPUs, or compute, or whatever. Let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us
as we possibly can.
Yes.
Where should we start?
I think we should start with when AI started as a field. So this was back in 1956, when a group of scientists gathered at Dartmouth College to start a new discipline, a scientific discipline, to try and chase an ambition. And specifically, an assistant professor at Dartmouth, John McCarthy, decided to name this discipline artificial intelligence.
This was not the first name that he tried. The previous year, he had tried to name it Automata Studies. And the reason why some of his colleagues were concerned about this name was because it
pegged the idea of this discipline to recreating human intelligence. And back
then, as is true today, we have no scientific consensus around what human intelligence is. There's no definition from psychology, biology, or neurology. And
in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives. It's
been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people. There are no goalposts for this field, and there are no goalposts for the industry when they say that they are ultimately trying to create AI systems
that would be as smart as humans. How do
we even define what that means? And when
are we going to get there if we don't know how to define the destination? And
what that effectively means is that these companies can just use the term artificial general intelligence which is now the term to refer to this ambitious
um goal to recreate human intelligence.
They can use it however they want to and they can define and redefine it based on what is convenient for them. So in
OpenAI's history, it has defined and redefined it many times. When Sam Altman is talking with Congress, AGI is a system that's going to cure cancer, solve climate change, cure poverty. When
he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever going to have. When he was talking with Microsoft, you know, in the
deal that OpenAI and Microsoft struck, where Microsoft invested in the company, it was defined as a system that will generate a hundred billion dollars of revenue. And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work. This is not a coherent vision of one technology. These
are very different definitions that are spoken out loud to the audience that needs to be mobilized to ward off regulation or get more consumer buy in
into the industry's quest, or to get more capital and more resources for continuing on this journey with ambiguous definitions.
I mean, speaking about different definitions through time: in 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly
outlined the existential risk by saying, "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen, for example, an engineered virus, but AI is probably the most likely way to destroy everything
in general."
When Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind; there are other people
that he is trying to motivate or mobilize when he says these things. And
in that particular moment, Altman was trying to convince Elon Musk to join him in co-founding OpenAI. And Musk in particular was spending all of his time
sounding the alarm on what he saw as a huge existential threat that AI could pose. And so in that blog post, if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors the things that Musk was saying almost identically.
I mean, ten years ago, Musk was going on podcasts, tweeting, whatever, saying that the greatest existential risk to humanity was AI.
Yeah. And so, you know, his parenthetical, 'there are other threats that might actually be more likely to happen, like engineered viruses': it's because up until then, Altman had been talking just about engineered viruses. And so now that he needs to pivot to speak to an audience of one, to Musk, he needs to resolve the contradiction between what he's now elevating as his new central fear, to be the same as Musk's central fear, and what he had previously been saying. So that's why he's like, I think this is the threat now, even though before I said something else.
And are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman.
Elon Musk did end up co-founding it with Altman. And certainly
from Musk's perspective, he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor. And of course, then Musk leaves. And through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit. And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this. So
in 2015, Sam Altman is writing these blog posts saying this is, you know, one of the greatest existential threats. At the same time, Musk is giving some very famous speeches at MIT. He said that AI was the biggest existential threat and compared developing AI to summoning the demon. And what you're saying here is that Sam Altman was just mirroring the language that Elon was using, to get Elon involved in OpenAI. And later, it appears, and again there's a legal case taking place now, that Sam might have muscled Elon out in some capacity.
Yeah. So we know from the lawsuit, and the documents that have come out in the lawsuit, that Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, were deciding whether or not to maintain OpenAI as a nonprofit, because it was originally founded as a nonprofit. They decided: okay, we need to create a for-profit entity. But the question was, who should be the CEO of this for-profit entity? Should it be Musk or should it be Altman? Because they were the two co-chairmen of the nonprofit. And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO.
But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, who was a friend of his, they had known each other for many years through the Silicon
Valley scene, and said, "Don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for-profit entity, because, you
know, he's a famous guy. He has a lot of pressures in the world. He could be threatened. He could act erratically. He
could be unpredictable. And do we really want a technology that could be super powerful in the future to end up in the hands of this man? And that convinced
Greg, and Greg then convinced Ilya: you know, I think there's a point here. Do we really want to give this much power to Musk? And that is why Musk then leaves, because the two of them switch their allegiances. They say, "Actually, we want Altman to be the CEO." And then Musk is like, "If I'm not CEO, I'm out."
So, it sounds like Sam again managed to persuade someone to do something.
Mhm.
I guess this begs the question: what do you think of Sam Altman?
I think he's a very controversial figure.
You did an interesting pause. It's a
pause where someone tries to select their words.
Well, this is what's so interesting about those interviews: people are extremely polarized on Altman. No one has in-between feelings about him.
Either they think he's the greatest tech leader of this generation akin to the Steve Jobs of the modern era or they think that he's really manipulative and
an abuser and a liar. And what I realized because I interviewed so many people is it really comes down to what that person's vision of the future is
and what their goals are. So if you align with Altman's vision of the future, you're going to think he's the greatest asset ever to have on your side because this man is really persuasive.
He's incredible at telling stories. He's
incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen. But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision even if you fundamentally don't agree with it. And this is the story especially of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI. So, for people that don't know, Dario now runs Anthropic, which is the maker of Claude. A lot of people probably are more familiar with Claude.
Yeah. And it's one of the biggest competitors to OpenAI.
And Amodei, at the time when he was an executive at OpenAI, thought that Altman was on the same page with him, and then over time began to feel that Altman was actually on exactly the opposite page, and felt that Altman had used Amodei's
intelligence, capabilities, skills to build things and bring about a vision of the future that he actually fundamentally didn't agree with. And so
that's why people end up with this bad taste in their mouths. And so, you know, I've been covering the tech industry for over eight years and covered many companies: Meta, Google, Microsoft, in addition to OpenAI. And Altman is the only figure I've seen this degree of polarization with, where people cannot decide
whether he's the greatest or the worst.
You mentioned Dario there, and what I found really interesting is to look at how people's quotes evolve over time with their incentives. So I was looking at all of the things they've said on the record, on podcasts, in their blog posts, to see how it's evolved over time. And Dario, who was the former VP of research at OpenAI and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, and this is a quote: "I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%."
And also you mentioned Ilya, who was a co-founder of OpenAI and then left. I guess the first question I'd ask is: why did Ilya leave?
It's a great question.
So, he was instrumental in trying to get Sam Altman fired, and he's another one of the people who over time began to feel like he was being manipulated by Altman towards contributing to something that he didn't believe in. And, you know, because I interviewed a lot of people, Ilya in particular had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely. And he felt that Altman was actively undermining both things. He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people.
Have you ever spoken to him?
I have. So, I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review. And back in 2019, he has a quote where he says, "The future's going to be good
for AIs regardless. It would be nice if it was also good for humans as well.
It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful. And I
think a good analogy would be the way that humans treat animals. It's not that we hate animals. I think humans love animals, and I have a lot of affection for them. But when the time comes to
for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's
permission. We just do it because it's important to us. And I think by default, that's the kind of relationship that's going to be between us and AI, which are
truly autonomous and operating on their own behalf. And that was in 2019, the
own behalf. And that was in 2019, the year that you interviewed him.
One of the things that I feel like we should take a step back to examine is going back to this idea of: what even is artificial intelligence, and what do we mean by intelligence? And a huge part of
the views of the different people and the quotes that you're reading derives from a specific belief that they each have in this question of what is intelligence, what constitutes
intelligence.
For Ilya, he has, throughout his research career, felt that ultimately our brains are giant statistical models. This is not something that we actually know, but it is his own hypothesis, and also the hypothesis of his mentor, Geoffrey Hinton, who was also on this podcast.
This is why they have such a strong conviction in the idea of building AI systems that are statistical models and that this particular approach is going
to lead to intelligent systems, as we are intelligent. It's a hypothesis that they have. It's not one that has been proven by science, and some people vehemently disagree with them on this particular thing. But if you step into their shoes and take on that hypothesis, assume that it's true, that our brains are in fact statistical engines, and that these systems they're building are also statistical engines that they're making bigger and bigger until they become the size of the human brain, then that's why they say this comparison, where the system will become equal to human intelligence and then maybe exceed human intelligence, is relevant in their framework. Ilya gave a talk at one point at this really prominent AI research conference that happens every year, called Neural Information Processing Systems. It's a
mouthful, but he gave this keynote where he shows this chart of the size of brains and the intelligence of a
species. And it's roughly linear: the bigger the size of the brain, the more intelligent the species. And so, for him, he thinks he's building a digital brain, because he thinks brains are just
statistical engines. So, from that logic, it's like: okay, if we then build a bigger statistical engine than the human brain, then based on this chart it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to. But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot of debate about whether this is in fact the case. Some of the biggest critics say it's very reductive to think of our brains as simply statistical engines.
Why does it matter to know the mechanism? Is it not just important to know the outcome, which is that it's going to be able to make a video for me, or agents are going to be able to do the work that I do? Does it really matter for us to know the mechanism behind it?
Yes and no. So, it matters because these companies are driving their future actions based on this hypothesis.
So they have decided: we think that this hypothesis is true, so we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence. And
that's then having global consequences like in order to continue doing that they're hoovering up more and more data.
They're building more and more data centers. They are, you know, exploiting more and more labor in order to continue on this path. Here's a
question that I think is important to ask is why are we trying to build AI systems that are duplicative of humans?
We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing. They said that we should be building AGI, so we say that we should be building AGI. I would like to ask: why are we doing that? Why is it that we are building a technology that is ultimately designed to replace and automate people away? That is not the enterprise of technology. The purpose of technology throughout history has been to improve human flourishing, not to replace people. And so this is
a critical part of my critique of these companies and these scientists that have just adopted this goal, and have relentlessly pursued it, and have had enormous capital and enormous resources to pursue it. Is this the right goal?
Like, why are we doing this? Why
can't we just build AI systems that do things like accelerate drug discovery and improve people's health care outcomes, which are systems that have nothing to do with the statistical
engines that they're trying to build to duplicate the human brain?
So why are they doing it? I mean, you've interviewed all these people, I think it's what, 300 people in total, 80 or 90 of them from OpenAI, the maker of ChatGPT. Why do you think they're doing it?
I think it's because they're driven by an imperial agenda. And that is why I call these companies empires of AI.
What do you mean by an imperial agenda?
What does that term mean?
Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do and the scale that they operate and
what motivates them to do what they do.
And there are many parallels that you see between what I call the empires of AI and the empires of old. First, they lay claim to resources that are not their own in the pursuit of training these models: the data of individuals, the intellectual property of artists, writers, and creators. They're land-grabbing in order to build these supercomputer facilities for training the next generation of models. Second, they exploit an extraordinary amount of labor. They contract hundreds of thousands of workers all around the world, including in the US, to ultimately make these technologies. We can talk
about that more. And they also design their tools to be labor-automating, so that when the technologies are deployed, it also affects labor rights, because it erodes labor rights. And this is a political choice that they have made. Third,
they monopolize knowledge production.
And so they project this idea that they're the only ones that really understand how the technology works. And so if the public doesn't like it, it's because they don't actually know enough about this technology. They do this to the public. They do this to policymakers. And they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.
You think they're gaslighting the public in a way?
They are. Yeah. So if most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?
No.
And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world. So they set the agenda on AI research in soft ways, simply by funneling money to their priorities, so that only certain types of AI research are produced. But they also will censor researchers when they do not like what the researcher has found. And so I talk about the case of Dr. Timnit Gebru in my
book, who was the ethical AI team co-lead at Google. She was literally hired to critique the types of AI systems that Google was building. She then co-wrote a critical research paper showing how large language models specifically were leading to certain types of harmful outcomes. And in an attempt to stop this research from being published, Google ended up firing Gebru, and then fired her co-lead, Margaret Mitchell.
And so they control and quash the research that is inconvenient to the empire's agenda.
Did you have an example where this is happening to journalists as well, that are asking questions of their team members? I think I was watching a video of yours where there was a young man who said someone showed up at his door, knocked, and asked for information: emails, text messages. And this person was from one of the big AI companies.
This was OpenAI. It started subpoenaing some of its critics, as part of what appears to be a campaign of intimidation, but also what appeared to be a campaign of fishing for more information, to map out the network of critics further. But this was a man who runs a small watchdog nonprofit, and they had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit. Ultimately, OpenAI was successful in that conversion. But during the period where it was sort of existential for OpenAI to complete this conversion, there were a lot of civil society groups and watchdog groups, like the Midas Project, who were trying to prevent the process from happening in the dead of night. They were trying to get more transparency. They were trying to have more public debate about this, because it's unprecedented. And it was then that there was a knock on his door and he was served papers.
What did the papers say?
The papers asked him to reproduce every single piece of communication that he had had that might have involved Musk. So this was this strange paranoia that OpenAI had, that Musk was somehow funding these people to block the conversion. None of them were actually funded by Musk. So in this particular case, he simply answered, you know, "I don't have any documents, because this doesn't exist."
So going back to this point of empires, you were saying that one of the factors of an empire is a land grab, and then the next one was labor exploitation. The third one, controlling knowledge production.
And one of the other ones that's really important to understand about the AI empires in particular is: empires always have this narrative that they say to the public, like, we're the good empire, and we need to be an empire in the first place because there are also bad empires in the world. And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone. We will bring you to this utopic state akin to an AI heaven. But if the evil empire does it first, we will descend into a hell.
And the evil empire being, in this case?
In this case, most often it's China. But actually, in the early days, OpenAI evoked Google as the evil empire.
So all of their decisions were about: we need to do it first, because otherwise Google, this evil corporation that's driven by profit, wins instead of us as a benevolent nonprofit. Like, this is a critical contest of who wins.
Do you think the people building these AI companies believe that the outcome is going to be all good? Do you think they think that it's going to serve everyone? It's going to be the age of abundance, everything's going to go well. What do you think they believe? What do you think Sam believes?
So, this is so funny. Such a core part of the mythology that they create around the AI industry includes the belief that it could go very badly. It goes hand in hand. Like, they need that part of the myth in order to then say, and that's why we need to be in control of the technology, because that's the only way that it's going to go really, really well. And Altman has said publicly, you know, worst case, lights out for everyone, but best case, we cure cancer, we solve climate change, and there's abundance. And Dario Amodei, same kind of rhetoric: worst case, catastrophic or existential harm for humanity; best case, mass human flourishing. So these are like two sides of the same coin. Like, they have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development, where there should not be broad participation in developing this technology. They must be the ones controlling it at every step of the way.
Sam Altman did a tweet saying, "There are some books coming out about OpenAI and me. We only participated in two of them: one by Keach Hagey focused on me and one by Ashlee Vance on OpenAI." He went on to say, "No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to..."
You quote-retweeted that tweet from Sam Altman and you said, "The unnamed book, Empire of AI, is mine."
Do you believe that tweet from Sam Altman was in reference to your book?
100%. Because there's only three books coming out about him.
And he had caught wind that your book was coming out?
He knew my book was coming out because I had contacted OpenAI from the very beginning of my process and said, "I'm working on a book now. Will you participate in it?" And actually, initially they said yes. So, my history with OpenAI: I profiled the company for MIT Technology Review. I embedded within the office for three days in 2019. My profile comes out in 2020; the leadership are very unhappy. And in my book, I actually quote an email that I received that Sam Altman sent to the company about my profile, saying, "Yeah, this is not great."
And from then on, the company's stance to me was: "We are not going to participate in anything that you do. We are not going to respond to any of your questions." And this was, you know, something that they explicitly articulated. It wasn't like me inferring. So I had a colleague at MIT Technology Review that also covered AI. And at one point OpenAI sent him this press release, being like, "We would love for you to cover this story." And he was like, "I'm really busy. Will you send it to Karen?" And they were like, "Oh, no. We have a history. You understand?" And so, for three years they refused to talk to me. But then I ended up at the Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication.
And so I started having, you know, more dialogue with them. Every time I wrote a piece, I would always send them, "Here's my request for comment." I would always ask them, like, "Will you sit for interviews?" And we did get to a more productive relationship. And then I embarked on the book. So I left the Journal to focus on the book full-time. And I told them right away: I'm working on this book, and I want to continue this productive conversation, where I make sure I reflect OpenAI's perspective in the book. And so they were like, "We can arrange interviews for you. You can come back to the office. We'll set up some conversations."
And then, as we were going back and forth on this, the board fired Sam Altman.
And that's when things started going kind of south, because the company started becoming very sensitive to scrutiny. And so then they started kicking the can down the road, down the road, down the road. And I kept saying, "Hey, when are we rescheduling this? What's going on?" And then I get an email saying, "We are not going to participate at all. You are not coming to the office. You're not doing interviews." And I had actually already booked my tickets. So I was already going to fly to San Francisco to have the interviews. And so then I told them, "That's fine. I will still engage in the process, where I'll give you extensive requests for comment. Through my reporting, I'll keep you updated on all the things that I'm finding, so that you can choose to still comment." I gave them 40 pages of requests for comment, and I gave them over a month to respond to all of that. So, when the tweet came out, we were doing all this back and forth, and that's when Altman tweeted this.
Huh. And they never responded to a single one of the 40 pages.
Sam Altman does a lot of interviews.
Yeah.
You know, he's doing a lot of interviews all the time. He's done every podcast.
I've seen him on everything from Tucker Carlson to I think he's done Theo, Joe Rogan, um podcasts all over the world.
I wonder why he won't do mine.
Well, maybe.
I don't know why. I don't know. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions, or at least I meet people for the first time. But I've heard through the grapevine that he doesn't want to do mine.
I mean, going back to what you were saying earlier, with the way that OpenAI and these companies control research, you asked, do they also do this with journalists?
I mean, yes, the answer is yes. And apparently they also do it with anyone who has, you know, a broad mass communications platform.
It's not just about the conversation that you're going to have with them.
It's about who you also choose to platform.
And there's this huge problem in technology journalism where companies know that a really big carrot that they can give to technology journalists is access.
Yeah. Yeah. Yeah.
And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone that they didn't want you to speak to.
This is so true. And I don't think the average person really truly understands this.
Yeah. So, this kind of sounds like theory as you say it, but, I'm not going to name names here because I don't think it's important, but there is a particular person in AI whose team have basically dangled the carrot of them coming here for like 18 months. And I'm like, you don't have to dangle the carrot. I'm going to speak to whoever I want to, regardless of the carrot or not. And when this person comes, if they want to come, I'll give them a fair shot. I'll ask them all genuinely curious questions about what they're doing, their incentives. I won't gotcha them. I don't have a history of ever gotcha-ing anybody. Even if I have a difference of opinion, I'll ask the question.
Yeah.
But they dangle carrots and they say, "Well, you know, he's thinking about it, let's think about a date." And what the strategy is, and I think those people don't understand this, is: if we just dangle it for long enough, then they will perform in the way that we want them to, and they'll be pleasant about us. They won't be critical. They won't give a critique.
Or be critics.
And I think a lot of their game is just dangle the carrot forever.
Yes. Yeah.
That's like, the optimal outcome is if we just dangle it. If we just tell them, yeah, look, we're just looking at the schedule.
It just doesn't work. I think in the modern world, you just have to go there and give your opinion and allow the clash of ideas in the public forum, and let the viewers decide for themselves.
Yeah.
What they think.
Yeah.
Um, but this is... yeah, this is such a huge part of their machinery: the way that they use these tactics to massage the public image of these companies and make sure that information that they don't want out, and even opinions that they don't want out there, don't get out there.
Mhm.
And so this is, you know... I feel very lucky now that OpenAI shut the door early on me. At the time, I didn't feel lucky. I felt like I had screwed myself over. Access is everything to a journalist, right? But you're supposed to report the truth, and you're always supposed to report in the interest of the public. Like, that is the point of journalism. And in that moment, I was relatively junior in my career. I was like, did I misunderstand what journalism is about? Like, should I have actually been playing the access game?
Mhm.
But it was too late. I had the door shut to me, and so I had to build my career understanding that the front door was never going to be open.
Yeah.
And that actually really strengthened my own ability to just tell it like it is, objectively, and just report what I see as the facts being presented to me, irrespective of whether the company likes it or not. And most often the company really does not like it, but I can continue to do the work. They don't need to open the front door for me. I was still able to do more than 300 interviews.
So Sam Altman gets kicked off the OpenAI executive team.
Did you find out why that happened?
Yeah, there's a scene-by-scene recounting.
From whom?
I can't remember the exact number of sources, so I don't want to misquote myself, but it was around six or seven people that were directly involved, or had spoken to people directly involved, in the decision-making process.
So, Ilya Sutskever is having these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company.
He then approaches a board member, Helen Toner.
Ilya, for anyone that doesn't know, is the co-founder of OpenAI we mentioned earlier.
Yes. And he kind of does a bit of a sounding-board thing with Helen, just because Ilya is freaking out. He's been sitting on these concerns for a while, and he's like, if I tell this to someone, this could also be really bad for me if Altman finds out.
And so he asks for a meeting with Toner, and in that first meeting he barely says a thing. He's just dancing around, trying to figure out: hey, is this someone that I can maybe trust to divulge more information?
And Toner's role and responsibilities at OpenAI were she was a board member.
Just a board member.
Yeah. And specifically an independent board member. So at OpenAI, when it was a nonprofit, the board was split between people who had a financial stake in the company and people who were fully independent. And this was meant to be a structure that would balance the decision-making to be in the benefit of the public interest, rather than in the benefit of the for-profit entity that OpenAI then created.
And Ilya, as a non-independent board member, was approaching Toner, as an independent board member, to try and see whether or not she was seeing or hearing the same things that he was about the effect that Altman was having on the company. This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members.
Mira Murati was at that point the chief technology officer of OpenAI. These two senior leaders essentially, through these conversations and through documentation that they're pulling together, like emails, Slack messages and so forth, convey to the three independent board members: we are very concerned about Altman's leadership. He is creating too much instability at the company, and he is the root of the problem. They were trying to say to these independent board members: the problem will not be fixed unless Altman is removed, because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore, and they're competing rather than collaborating on what's supposed to be this really, really important technology.
When you say instability, that's quite a vague term. That could mean lots of things. Like, instability could mean pushing people hard to work harder, right? What do you mean by instability, in as specific terms as you can possibly say?
When ChatGPT came out in the world, OpenAI was wholly unprepared.
They didn't think that they were launching a gangbusters product.
Yeah. They thought they were releasing a research preview that would help them get the data flywheel going, collect a bunch of data from users that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4. And ChatGPT was using GPT-3.5.
And because of that, there were servers crashing all the time, because they had to scale their infrastructure, you know, faster than any company in history. And there were all of these outages. They were trying to also hire faster than any company in history, to try and have more personnel there. And they were then sometimes hiring people that they were like, "Actually, we made a mistake. We shouldn't have hired you." So they were firing people left and right, and people were just disappearing off of Slack, and that's how their colleagues would learn that they were no longer at the company. And so it was, yes, like many fast-growing companies, a very chaotic environment, and a particularly chaotic environment because it was extra fast. Like, they had to accelerate more than any other startup.
And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. Like, he was not actually effectively ameliorating the circumstances of the chaos. He was actually sowing more chaos, getting these teams to be more divided.
And this is where it's important to understand that the executives and the independent board members are all operating under this idea that they're building AGI, and that AGI could either be devastating or utopic for humanity. And so, yes, it's like any other company, and no, it's not like any other company. In their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that, in their conception, could make or break the world.
And so that is basically what the independent board members also begin to reflect on. They have these conversations amongst themselves where they're like, "Well, based on what we're hearing about Altman's behavior, if this was an Instacart, would that warrant firing him?" And they concluded, "Maybe not, but this is not Instacart."
And that's why they were like, "Well, crap. Maybe this actually does rise to the bar where we should consider replacing him, because we are ultimately building a technology that we think could have transformative impacts in either the positive or negative direction." And so that is what happens.
It's like, these two executives, and then the independent board members also, they were hearing other feedback as well, from their connections within the company and with other people in the industry. At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, which is, you know, a tech startup in the Valley, is at a party in San Francisco, and he starts to hear some of these rumors that there's something weird about the way that OpenAI has structured its OpenAI Startup Fund, which was this fund that the company had created to start investing in other startups.
Mhm.
And he realizes they'd never really seen documentation about how the Startup Fund had been set up from Altman. And finally they get the documents, and it turns out that the OpenAI Startup Fund is not OpenAI's startup fund. It's Altman's startup fund. And this was one of several experiences that the independent board members were also having, where they're like, there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying what is being done versus what is actually being done. And so when these two executives approach the independent board members, they're like, "Okay, this lines up with the experiences that we've been having."
And at that point, they then have this series of very intense discussions where they're meeting almost every day talking about should we actually really consider
removing Altman?
And in the end they conclude: yes, we should. And if we're going to do it, we need to do it quickly. Because they were very concerned that the moment Altman found out, his persuasive abilities would make it impossible to do. And so they end up firing Altman without telling anyone. You know, they don't talk to any stakeholders to get them on the same page. Microsoft gets a call right before they execute the action, saying, "We're going to fire Altman."
And Microsoft, for anyone that doesn't know, was a lead investor in OpenAI at the time.
Yes. One of the only investors in OpenAI at the time. And that is what then devolves the whole thing, because every single person that is affected by this decision is now extremely angry that they were not involved. And that is what then creates this campaign to bring Altman back. And then Altman is reinstalled as CEO days later.
This company that I've just invested in, it's grown like crazy. I want to be the one to tell you about it, because I think it's going to create such a huge productivity advantage for you. Wispr Flow is an app that you can get on your computer and on your phone, on all your devices, and it allows you to speak to your technology. So, instead of me writing out an email, I click one button on my phone and I can just speak the email into existence, and it uses AI to clean up what I was saying. And then, when I'm done, I just hit this one button here and the whole email is written for me. And it's saving me so much time in a day, because Wispr learns how I write. So on WhatsApp, it knows I am a little bit more casual.
On email, a little bit more professional. And also, there's this really interesting thing they've just done. I can create little phrases to automatically do the work for me. I can just say "Jack's LinkedIn" and it copies Jack's LinkedIn profile for me, because it knows who Jack is in my life. This is saving me a huge amount of time. This company is growing like absolute crazy.
And this is why I invested in the business, and why they're now a sponsor of this show. And Wispr Flow is frankly becoming the worst-kept secret in business productivity and entrepreneurship. Check it out now at wisprflow.ai/steven, spelled W-I-S-P-R-F-L-O-W. It will be a game changer for you.
There's a phase a lot of companies hit where they're no longer doing the most important thing, which is selling. And they get really bogged down with admin. And it's often something that creeps up slowly, and you don't really notice until it's happened. Slowly, momentum starts to leak out. This happened to us, and our sponsor Pipedrive was a fix I came across 10 years ago. And ever since, my teams across my different companies have continued to use it. Pipedrive is a simple but powerful sales CRM that gives you visibility on any deals in your pipeline. It also automates a lot of the tedious, repetitive, and time-consuming parts of the sales process, which in turn saves you so many hours every single month, which means you can get back to selling. Making that early decision to switch to Pipedrive was a real game changer, and it's kept the right things front of mind. My favorite feature is Pipedrive's ability to sync your CRM with multiple email inboxes, so your entire team can work together from one platform. And we aren't the only ones benefiting. Over 100,000 companies use Pipedrive to grow their business. So, if something I've said resonates, head over to pipedrive.com/ceo, where you can get a 30-day free trial. No credit card or payment required.
How does a CEO of a major company get fired by the board? Because, board members... there's a quote in your book, on page 357, where you say, about Ilya: "I don't think Sam is the guy who should have the finger on the button for AGI." Now, I asked myself this question. You know, I work with lots of people here. We have 150 people that work in this business, and those people know me best.
Yeah.
They see me on camera. They see me off camera. So if they said, "We don't think Steven is the right person to host The Diary"... yeah.
It would take a lot for them to say that.
Yeah.
They must have seen something off camera for them to go, "We don't think he's the right person to be on camera." Yeah.
Or for whatever reason. And in the case of AI, which is much more consequential than a podcast that is, you know, filmed in my old kitchen, it almost sends a chill down one's body to think that the co-founder of a business has gone to the board and said, this isn't the guy to lead this consequential technology.
And Mira Murati then also said, "I don't think Altman is the right guy," and then they both left later.
So then Altman comes back, and lo and behold, Ilya never comes back. So his concerns, about the fact that Altman finding out would be bad for him, manifested. He ended up not coming back, and Mira Murati then left shortly thereafter.
Quite a lot of these people leave OpenAI, don't they?
They do. So if you consider, one of the origin stories of OpenAI is this dinner that happened at the Rosewood Hotel, which is a very swanky hotel right in the heart of Silicon Valley that was one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area. And there was this dinner there where Altman was intending to recruit the OG team that would start OpenAI. So he's kind of telling everyone, you might have a chance to meet Musk, because Musk is going to come to this dinner. And he cold-emails Ilya and gets Ilya to come, and Ilya specifically wants to come because he wants to meet Musk. And he also emails all these other people, including Greg Brockman and Dario Amodei. These are all people that ended up working at OpenAI, and almost all of them, not every one of them, but almost all of them, end up leaving, specifically after they clash with Altman. And Ilya, he left and launched a company called Safe Superintelligence.
Yeah.
Which is, I mean, that's an indirect dig if I've ever heard one. Do you know what I mean? If someone co-founded this podcast with me and then they left and started a podcast called Safe Podcasting, I'd take that as a slight.
I'd have people knocking on their door and asking for their texts.
One of the things that is happening here is, it is not a coincidence that every single tech billionaire has their own AI company.
Mhm.
They want to create AI in their own image, and that's why they keep not getting along. And in fact, it's not just that they don't get along; they end up hating each other after working together.
Mhm.
And then splinter off into their own organizations. So after Musk leaves, he starts xAI. After Dario leaves, he starts Anthropic. After Ilya leaves, he starts Safe Superintelligence. After Mira leaves, she starts Thinking Machines Lab. They want to have control over their own vision of this technology. And the best way that they have derived, from their experiences of trying to put their vision into the arena, is by creating a competitor and then competing with OpenAI and with all the other companies out there.
Do you think some of these AI CEOs realize that they are quite literally summoning the demon, as Elon said 10 years ago, but they don't really care, because being the person that summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific? Even if there's like a 20% chance of it being horrific.
I remember, I think it was Dario, he's the one that said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization. 25% is a one-in-four chance.
If you put a bullet in a four-chamber revolver and said, "Steven, the upside is you could become a multi-gazillionaire and be remembered forever. The downside is that there would be a bullet in your head," there is no chance that I would take that bet, with a 25% potential chance of things going catastrophically wrong.
So, I have a very long answer to this. Because, do they know if they're summoning the demon? It really depends on what we define as summoning the demon. And in this particular case, to go back to what we were saying before, there's a mythology that the AI industry uses, where summoning the demon is an integral part of convincing everyone that, therefore, they can be the only ones that are developing this technology.
I got it. So on one end, you've got to say: if we don't, China will, and that's terrible.
Yeah.
But if we let anyone else do it other than me, then we're doomed as well.
Exactly.
So that means that I have to do it and you have to give me money and support.
Exactly. So when they're saying these things, we should understand it not as a genuine prediction based on what they're seeing, because, first of all, we don't predict the future. We make it. We should understand this as an act of speech, to persuade other people into believing that they should cede more power, more resources, to these individuals. And so, do they know that they're summoning the demon?
I mean, they are purposely trying to create this feeling within the public that they are, because it is a crucial part of their power.
But if we were to define it as: do they realize that the things that they are doing are already having really harmful impacts all around the world, on vulnerable people, vulnerable communities, vulnerable countries? That's where I'm like, maybe yes, maybe no, and they don't really care. Because, in their frame of mind... like, I sometimes use the analogy that the AI world is like Dune.
Dune, for anyone that doesn't know, is a science fiction epic written by Frank Herbert, and it's set in this intergalactic era where there are all these houses, and they're fighting each other for spice. So it's a callback to colonialism and empire, and they all are trying to control the spice. But one of the features of this story is that there are these myths that are seeded on the different planets, a religious myth, basically, about the coming of a messiah, that are used as ways to control the people.
And Paul Atreides, when he arrives at the planet Arrakis with the intention of trying to fight against the empire and avenge his father's death, steps into a myth that has been seeded on this planet, that says one day there will be a Messiah that comes and saves the planet.
So he steps into the role of the Messiah and leans into this idea in order to better control the people and rally them behind him as a leader to help with this
quest.
He knows that it's a myth in the beginning, but because he lives and breathes and embodies it, it kind of starts to blur in his mind whether this is really a myth or whether he's really the Messiah. And this is what I think happens in the AI world. On one hand, there are all these executives that
actively engage in mythmaking because, you know, I have all these internal documents that I write about in the book, where they are very keenly aware of how to bring the public along with them: by showing them dazzling demonstrations of the technology, by crafting a mission that will sound really good and make people give more leniency to their companies. So they
know they're doing the mythmaking and also I think many of them lose themselves in the myth because they have to live and breathe and embody it day in
and day out. And so when, you know, Dario says he thinks that 10 to 25% of the future could be catastrophic, or whatever the probability is,
he is actively engaging in the mythmaking, but he's also losing himself in the myth. Like, I think if you were to ask him, "Do you genuinely believe that?" he would be like, "Yes, I genuinely believe that," because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to continue doing the things that he's doing.
And this is the whole psychology of cognitive dissonance, right? Where the brain struggles to hold two conflicting worldviews at the same time, so it's incentivized, or it endeavors, to dismiss one. So if you wanted to be a healthy person but also a smoker, and I pointed out that smoking is bad for you, the first words out of your mouth are going to be, "Yes, but smoking helps me with stress," or, "Yeah, but I only do it when..."
I think, I don't know, I kind of see that at the moment, because these companies have to raise extortionate, like, huge amounts of money to fund their AI research, and they're building out all of these data centers.
So when they're out in the public, they're always fundraising. All of these major companies are fundraising all the time at the moment.
So you can't be fundraising and saying, "I'm going to destroy your children's future, potentially. There's a 25% chance that your children aren't going to have a great life."
Which might be the truth.
I mean, that is actually what they say. This is what Dario Amodei famously does. He does that, but the others... Sam's not doing that as much anymore.
Yes. And it's because, you know, it goes back to each of them kind of distinguishing themselves a little bit, as the brand that they need to project.
Do you think any of them have a stronger moral compass than others? Because I think Dario often gets the credit for having more of a backbone and being more conscious of the implications.
He does get a lot of credit for that.
He's from Claude and Anthropic, for anyone that doesn't know.
I don't think the answer to that question truly matters, because, to me,
even if you were to swap all the CEOs for someone that people would say is better at running these companies, it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies, and the people running these companies, get to make decisions that affect billions of people's lives around the world, and those billions of people do not get any say in how it goes.
Those people, they can go to the polls, right? So, if the public are sufficiently educated, they can go to the polls and pick a leader that says they're going to legislate, or pass laws, or try and pass laws.
Yes.
But at the speed and pace at which these companies operate, and at their sheer scale and size, they're able to also spend extraordinary amounts of money, hundreds of millions in these upcoming midterms, to try and kill every possible piece of legislation that gets in their way, and craft legislation that would codify their advantage.
And so, to me, I think sometimes as a society we obsess a little bit over whether these leaders are good or bad people.
And to me, the bigger question is: is the governance structure that we've created a sound one that allows broad participation, or an anti-democratic one that has consolidated this decision-making power in the hands of the few? Because no person is perfect. I don't care who is at the top of these companies; they're not going to have the ability to make decisions on behalf of so many people around the world, who live and talk and have a culture and history that are fundamentally different from theirs, without things going wrong.
And so that is why, throughout history, we've moved from empires to democracy. It's because empire as a structure is inherently unsound. It does not actually maximize the chances of most people in the world being able to live dignified lives.
I'm going to try and take on their point of view. So, this is me playing devil's advocate. Okay. But Karen, if the US doesn't continue to accelerate their research with AI, at some point China's model is going to become so smart and intelligent that we're basically going to have to rent it off them, and we're going to be, you know, they'll get the scientific discoveries. They'll discover the new era of autonomous weapons, and we will be their backyard. And logically that argument does appear to be pretty true.
No, it's not.
If we scale up, if we just imagine any rate of change with this intelligence, at some point we're going to come to a weapon that could theoretically disable all of the United States' electricity, their weapons systems. It would know exactly how to disable the United States from a cyber perspective, because it would be that smart. All you've got to imagine is any rate of improvement over any sort of long period of time.
So this is a theory that might be true, and if it's true... I mean, yeah, any theory might be true. But, you know, again, going to this point of, even if it's a small percentage, it's worth paying attention to. On the other side of the foot, this is a theory that people talk about. It could be the case that the most intelligent civilization is going to be the superior civilization. Logically, that's a pretty sound thing to say.
No.
So, there are a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument. And let's knock them down one by one. So the first one is that these systems are intelligent, and that just scaling them is going to bring us more intelligence.
So far so true.
No, it's actually not, because, first of all, again, we don't actually know if these systems are... like, intelligence is not the right analogy, almost. It's sort of like, is a calculator... a calculator can do math problems faster than a human. Does that make it intelligent?
It has a narrow intelligence, because it's solving a narrow problem, which is like 1 + 1 = 2. And these systems are actually also quite narrowly intelligent, in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people. This is like the jagged frontier of these AI models: some of the capabilities are quite good, other capabilities are not that good. You know why that happens? It's because the company can only focus on advancing certain types of capabilities. It can't literally focus on advancing all types of capabilities. They have to actually set their mind to advancing a certain capability, by gathering the data that is needed for that capability, by getting a bunch of human contractors to annotate and train the model to do that exact thing. And so scaling these models is actually a perpendicular question to: are we actually getting more cyber capabilities specifically, and more military capabilities specifically?
I would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time. A lot of them do; Geoffrey Hinton does.
And again, it's back to his hypothesis about how human intelligence works, and what the appropriate model of the brain is. His hypothesis throughout his career has been that the brain is a statistical engine.
But that's his hypothesis, and that is not universally agreed upon, especially among people that are not in the AI world. When you talk with neuroscientists and psychologists, people who actually study human intelligence and the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has. And so this is kind of one of the things: AI is already being used in the military, and has been used in the military for a long time. But specifically accelerating large language models isn't the only path to getting military capabilities. The companies would have to choose to specifically pick military capabilities to accelerate, not just general intelligence. You know what I'm saying? They create this myth that they are actually pushing the frontier of all of the capabilities of the model, but that's not what's actually happening internally, and I had hundreds of pages of documents on how they were specifically training models. They pick what capabilities they want to advance, and you know how they pick them? It's based on which industries and countries would be able to pay them the most money for their services. So they pick finance, law, medicine, healthcare, commerce. It's not actually intelligent like a baby, where the more the baby grows up, they start having these general abilities.
I think I have jagged intelligence, I'll be honest. I wasn't going to say it, but I think I know a little bit about... no, I know a lot about a little bit.
Yeah, but you also have the capability to learn and acquire knowledge by yourself. And you also have the ability to choose what you're going to learn and acquire by yourself.
It's not easy, and it takes a lot more time than these models, but a lot less compute. And you can learn how to drive in one place and then immediately know how to drive in another place. These models cannot do that. Every time a self-driving car is shifted to another location, it has to completely retrain on that location.
Like all the self-driving cars? I mean, we're sitting in Austin right now, and there's all these self-driving cars driving through Austin.
But when one of them learns, they all learn.
Which is... well, it's just because it's an operating system that has an AI model as part of it, and you're training the AI model and then you deploy that AI model across all the self-driving cars.
A big advantage, because if one Optimus robot learns one thing in one factory, they all learn it. And imagine if humans... if we all learned what all the other humans learned, that would give us such an unbelievable competitive advantage.
I mean, one of the ways we did that is through communication.
Or they could not, because they could be learning the wrong thing, which has also happened again and again with these technologies: all of them learn the wrong thing, and they all have the same failure mode. I mean, part of the resilience of human society is that we do have different expertises, and we also have different failure modes.
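The shared-failure-mode point above can be sketched in a few lines of code. This is a toy illustration only, not any company's actual stack; the "white truck" blind spot, the function names, and the fleet are all invented for the example. The idea it shows: when every vehicle runs identical model weights, one flaw reproduces everywhere at once.

```python
def shared_model(image_label: str) -> str:
    """Stand-in for a deployed perception model with one blind spot."""
    # Hypothetical flaw baked into the shared weights: the model
    # confuses a white truck with open sky.
    if image_label == "white_truck":
        return "sky"          # the same wrong answer, everywhere it runs
    return image_label

# Five cars, one set of weights.
fleet = [f"car_{i}" for i in range(5)]

# Every vehicle runs the identical model, so the error is fleet-wide.
results = {car: shared_model("white_truck") for car in fleet}
print(results)  # every car misreads the truck as "sky"
```

Humans, by contrast, have different failure modes, so a single error doesn't replicate across the whole population in lockstep, which is the resilience point being made here.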
I think sometimes we hold AI models to a higher standard than we hold humans to.
And in a weird... because I'd hear, on stage, we're in Austin at the moment, and I'd hear people go, "Ah, but you know them AI models, they hallucinate sometimes." I'm like, "Have you met a human?" Like, I hallucinate all the time. I can barely spell or do math.
So, yes, but it's once again using this analogy that was specifically picked in the early days of the field as a way to market these technologies. Like, we're repeatedly using the intelligence analogy, relating these machines to human intelligence, as a way to try and gauge whether or not it is good or worthy or capable in society. I think
the output is the thing that is really the most consequential. Which is, OK, it might have a different brain and a different system, but does it arrive at the same capability? Is it able to do surgery on someone's brain? Is it able to drive a car? Like, my car drives itself in Los Angeles. I don't touch the steering wheel, and I can drive for many, many hours. And here in Austin, I just saw the ones the other day where they've removed the steering wheel and the pedals, the new cybercabs. So I go: it doesn't really matter if it's using a different system. If it's navigating through the world as a car, and it has a better safety record than human beings, then as far as I'm concerned, intelligence or not...
Yes, you know, but that was not the original argument that you made, which was that these systems are just generally going to become more intelligent across different things. This is a prediction that you're making, right?
Like, this is a prediction that all the AI leaders are making: Ilya's making it, Dario's making it, Elon's making it, Zuckerberg's making it, Altman's making it, Demis is making it.
And do you know what the common feature of all of them is? They profit
enormously off of this myth.
Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis housing 100,000 GPUs, specifically to scale up their AI models faster than their competitors. It appears that they've all converged around this idea that you can brute-force your way to greater, more generalized intelligence.
They've converged around the idea that you can brute-force your way into models that they can sell to people for automating certain tasks that are financially lucrative.
And I heard Elon say that if you're a surgeon, there's just no point. He was like, don't train to be a surgeon. He says in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived.
Yeah. You know,
do you think these things are true?
Well, you know, I'm pretty sure it was Hinton that famously, slash infamously, said there would be no need for radiologists anymore. He set a deadline that we've already passed; I don't remember how many years.
Radiology is doing great as a profession.
Do you think it will be in 5 years?
Okay. So, this once again goes back to this question of why do we build technology, and why should we specifically be building AI? For me, the whole project of technology development and advancement is not to advance technology for technology's sake.
It's to help people.
And there has been lots of research showing that actually the best outcome for people in a healthcare setting is for the radiologist to have the AI model in their hands, and for the human expert to use the AI model as a tool, as an input into their judgment. And it is that combination that leads to the most accurate and early diagnoses of certain types of cancer, which then helps improve the prognosis of the patient.
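The "AI as an input to human judgment" workflow described here can be sketched as a triage loop: the model's score orders the queue, but every case still reaches a human reader. The function names, threshold, and scores below are all invented for illustration, not taken from any real clinical system.

```python
def triage(cases, model_score, review_threshold=0.2):
    """Sort cases so the radiologist sees the highest-risk scans first.

    model_score: callable returning an estimated malignancy probability.
    Nothing is auto-diagnosed; below-threshold cases are still reviewed,
    just later in the queue.
    """
    scored = [(case, model_score(case)) for case in cases]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # riskiest first
    flagged = [c for c, s in scored if s >= review_threshold]
    routine = [c for c, s in scored if s < review_threshold]
    return flagged, routine

# Hypothetical scores standing in for a real model's output.
fake_scores = {"scan_a": 0.05, "scan_b": 0.71, "scan_c": 0.33}
flagged, routine = triage(fake_scores, fake_scores.get)
print(flagged, routine)  # ['scan_b', 'scan_c'] ['scan_a']
```

The design choice matches the research finding quoted above: the model prioritizes and flags, while the final read, and the diagnosis, stays with the human expert.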
Do you believe that in the coming years, pretty much all the cars on the road will be driving themselves?
No.
You don't you don't think so?
Mm-m.
How come?
Because of the way the technology works.
Because these are statistical... I mean, currently, the way that AI models are primarily developed, they're statistical engines. You have what's called a neural network, which is a piece of software that has a bunch of densely connected nodes.
And, like, parameters? Is this what they call parameters?
Yeah, pretty much. And you're just pumping a bunch of data into it, and then it's analyzing the data, finding all these correlations in the data, finding all these patterns. And then it's through those patterns that the machine is able to act autonomously, right? And so
the way that they're training a self-driving car is they're recording all this footage, and then they have tens of thousands or hundreds of thousands of human contractors that literally draw around every single vehicle in the footage, every single pedestrian, every single traffic light, every single lane marking, and label it exactly as such. So that then it's fed into an AI model that can identify all of these different components, and then it's connected to another piece of software that is not AI, that's saying: OK, if the AI model recognizes a pedestrian, we do not run over the pedestrian. If the AI model recognizes a red traffic light, we stop. And so the thing about statistical engines is that they're based on probabilities. They're not based on deterministic logic.
So the systems make errors all the time, and it's impossible, it is technically impossible, to get them to stop making errors.
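The two-layer architecture being described, a probabilistic perception model feeding a deterministic rule layer, can be sketched like this. All names, probabilities, and thresholds are invented for the example; the point is only to show why a deterministic rule layer can't eliminate errors that originate in the statistical layer upstream.

```python
def perception(frame):
    """Stand-in for a neural network: returns class probabilities."""
    # A real model would run inference on the camera frame here;
    # we fake a confident output for the sketch.
    return {"pedestrian": 0.93, "red_light": 0.02, "clear_road": 0.05}

def control(probs, threshold=0.5):
    """Deterministic logic layered on top of the model's probabilities."""
    if probs.get("pedestrian", 0.0) >= threshold:
        return "BRAKE"   # if we see a pedestrian, we do not proceed
    if probs.get("red_light", 0.0) >= threshold:
        return "STOP"    # if we see a red light, we stop
    return "PROCEED"

action = control(perception(None))
print(action)  # BRAKE
```

Note that the rule layer itself is fully deterministic, but its input is a probability: a misclassification upstream (the wrong label, or the wrong confidence) still propagates into the wrong action, which is the "can't get them to stop making errors" point above.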
Humans make errors way more than the systems in this case. Like, isn't the safety record something like 10 times safer to be driven by a Tesla with autonomous driving than for a human to drive?
It depends on the place. It depends on whether the Tesla was trained to specifically navigate the place that you're driving.
Get drunk.
Because if it's in Mumbai, in some place in Vietnam, no, it would not be safer. I would much rather be driven by someone that has been driving in that place their whole life. I'm not arguing against the fact that in certain places, where the car has been explicitly trained to drive in that place, it has a better safety record than the humans driving in that place. But you specifically asked if I think that most cars in the world, in the US, in the United States, because we're here...
I don't actually think that it's imminently on the horizon.
10 years?
No, I don't think so.
I sat with Dara from Uber, and he's pretty convinced that his 9 million couriers will be replaced by autonomous vehicles.
I mean, how long have self-driving cars been invested in thus far? It's been more than 10 years. And what percentage of cars right now are autonomous on US roads? I mean, part of it is actually not a technical problem, right? Part of it is also a social problem: do people even trust getting into these vehicles? Part of it is also a legal problem, which is: if the self-driving car kills someone,
which has happened.
Yeah, it has happened.
Who is responsible? So, in the case in LA, it was both Tesla and the driver, because the driver dropped their phone, they looked down, this was a couple of years ago, I believe, and they went to grab their phone and hit someone. So it went to court, and they were both held responsible, both the driver and Tesla. In terms of Tesla, pretty much every car now comes with autonomy for most people, I believe.
Partial autonomy.
Yeah, it's called full self-driving at the moment, where it's like...
I mean, yes, it is called full self-driving. Full self-driving supervised, where you kind of have to be looking in the right direction. But...
Yeah. So, it's partial autonomy.
And here in Austin, it's full autonomy cuz there's no steering wheel.
Yeah.
On the new car. So you can't drive it anyway. But it is, you know, the Model Y is the undisputed bestselling car in the world, across all brands. Well, I guess my point here is, these predictions where they say AI is going to completely change transportation and driving, it's going to completely change... lawyers aren't going to have jobs, accountants aren't going to have jobs. Do you believe that they are true? Do you believe that there's going to be mass job displacement?
Okay, so I do think that there are going to be huge impacts on employment, and we're already seeing those impacts.
It is not simply because the AI models are just automating those jobs away. It is specifically because the models are improving in certain capabilities based on what the companies developing them choose to improve them on, and executives at other companies are then deciding to fire or lay off their workers because they think that AI can replace the worker, irrespective of whether that might be true. And, you know, there have been cases like the Klarna CEO, who laid off a bunch of people thinking that he would replace everyone with AI, and then it didn't actually work and he had to ask some people to come back.
I actually DM'd him about this. If you're hearing this, it's because I've DM'd Sebastian and he's fine with me sharing this. Because I've heard his name mentioned a lot, and when we talked about AI in the past, people mentioned Sebastian and Klarna as the example, I wanted to clarify with him what the truth was.
He said, "It's great to hear from you. Um, I think sometimes people struggle with two things being true at the same time. I think it might be time to come back on your podcast.
To your point, this is the media misinterpreting my tweet. We are doubling down on AI more than ever. Klarna is shrinking by almost 100 employees per month due to AI. We used to be 7,400 at the peak; a year ago, 5,500. Now we're 3,300.
And by the end of summer, so this was last year, we will be 3,000 people. AI handles 70% of our customer service conversations at this moment. This is because we have realized that with AI, the production cost of software comes down to almost zero. Just like manufacturing used to be all handcrafted and then the machines came, code used to be all handcrafted up until a few years ago, and now it is machine-produced. And ultimately we pay people more than ever for the unique, handcrafted, man-made stuff. Klarna is a bank. People will want to connect to humans, not only machines. They want us to be personable, relatable, even flawed. So we need to make sure, while we are automating and replacing with AI, that in parallel we offer a super-available human experience."
I'm really glad you read
experience. I'm really glad you read this because I think it touches on some really important nuances to the AI. Yeah. Like the impact that AI is
the AI. Yeah. Like the impact that AI is going to have on employment. So I think the there's often these binary narratives. It's like AI is going to
narratives. It's like AI is going to come for every job.
Mhm.
Or people say AI is not actually working and it's not actually coming for jobs.
And the reality is, it's coming for jobs. There are definitely jobs that are being automated away because of the capabilities of the models. And there are also jobs being lost because executives are deciding to lay off the workers even if the models don't match the capabilities, because it's good enough. Like, they would rather have the good-enough model for way cheaper.
Or they made a mistake with hiring. They bloated their team, and it's a great, convenient thing to say.
Exactly. Like, there are many reasons. But clearly we're already seeing impacts on the job market. Like, the US jobs report that came out earlier this year showed that there has been a decline in hiring, a slowdown in hiring, especially across white-collar professional industries.
And you saw Anthropic's new report this week. The TL;DR is, it matches kind of what you were saying, where Anthropic looked at exactly how people were using their models, and they looked at what people are saying.
Yeah.
And they said that there's been a 40% reduction in entry-level jobs in particular, and then they made this graph, which has gone viral over the internet. The red shows where we are now in terms of capability, and based on how people are currently using the models, they extrapolated out a prediction that the blue part will be the disrupted parts. These are the things that they say AI can do right now, but people don't realize it yet.
So, if you look at it, it's kind of all the stuff you would expect.
Yeah.
It's the physical, real-world human stuff, which robots maybe can do someday, like construction or agriculture, that's untouched. But like office and admin, um, finance stuff, math...
And notice that these are all the things that I just named that they purposely picked: finance, math, law, media and arts.
That's me cooked.
Yeah.
Office and admin. I mean, they do focus a lot on, like, assistant-type and managerial work.
But the other thing that the Klarna CEO said was: people also want human experiences.
So it's not actually just about the capabilities of the models. It's also about what people want. Like, some things they would turn to AI for, and some things they wouldn't, irrespective of whether or not AI is capable of doing it, because of a preference for human-to-human interaction. And so what we're seeing right now is, yeah, the thing that happens with every wave of automation, which is that there is a bunch of entry-level work that gets automated away, and there are also new jobs created. But the jobs that are created are in one of two categories. There are people that get even higher-skilled jobs, and what he was saying, like, we pay people more for the handcrafted code now. And there are also the people who get way
worse jobs and so there was this amazing article in New York magazine that was talking about how a lot of people are getting laid off and then they end up
working in data annotation which is the labor that I've been referring to throughout this conversation that companies need in order to teach their models the next thing that the companies
are trying to automate. And so like a marketer gets laid off and then they go and work for a data annotation firm to train the models on the very job that
they were just laid off from, which will then perpetuate more layoffs if that model develops that skill. And the article was talking about how this has become a huge catch-all for a lot of people that are struggling to find job opportunities right now, including award-winning directors in Hollywood that are actually secretly doing this data annotation work to put food on the table. And so when they talk about
table. And so when they talk about there's going to be mass unemployment and then there's going to be some new jobs created that we can't even imagine, I think a lot of these narratives rarely
talk about like first of all, why are some jobs going away? It's not just because of the model capabilities, it's also because of executive choices and because of the rhetoric that they use if they want to just downsize. Um, but the
other thing that is rarely talked about is that a lot of the jobs that are created are way worse than the jobs that were there, and it breaks the career ladder. So it's the entry-level and the mid-tier jobs that get gouged out. It's higher-order jobs, and then way more lower-order jobs, that get created. And so, how do people continue to progress in their careers? There are no more rungs on the ladder.
I actually don't know the answer to this question, and I've been furiously trying to find a good answer to it, because, you know, everything is theory. And for my audience, I would say most of my audience don't run businesses. A lot of them do, a lot of them aspire to, but most don't run businesses. So they're kind of also in the land of theory. They're hearing lots of different things. Jack Dorsey does his tweet saying he's halving his headcount because of AI. They don't know what's true. They don't know the internal economics at Jack's company, and did he bloat the company during the pandemic, and is he just using this as an excuse to make the share price spike seven points because his investors now think they're an AI company, or whatever.
Mh.
It's hard to parse through. So eventually
I go, okay, what am I doing?
I have hundreds of team members, probably 70 companies I invest in, maybe five or six that I'm the lead shareholder in. What am I actually doing on a day-to-day basis right now? I also consider myself to be head of recruitment, but in the last month in particular, I have met extremely capable candidates, in terms of cultural alignment, hard work, those kinds of things, but I've had to take a great deal of pause, because when I run the experiment of "can I get an AI agent to do that exact same thing?", the answer is increasingly yes, especially in a world of OpenClaw.
And so what I'm curious about is, now you confront this decision where you're seeing, in this short-term period, you could just choose the AI agent, and in the long-term period there is no career ladder. So who are you promoting into these senior roles? Like, how do you resolve it for your own company?
Yeah, it's a good question. So, there's kind of two ways I'm thinking about it. I think really deep expertise is very, very valuable, because if you're now the orchestrator of, potentially, AI agents, it's really about having a deep understanding of the right question to ask, and that's someone who has deep expertise in something. So I need my CFO, because if she's going to be orchestrating our team of agents that might be doing financial analysis or whatever else, she needs to understand what to tell them to do in our company.
Mhm.
And in turn, financial analysts can't do that. They need the 50-odd years of experience that, you know, CLA has. On the other end, I need Cass. Cass is 25. Cass knows everything about AI agents. He's a young Japanese kid who's highly, highly curious. You know, on the weekend, he's building AI agents to solve problems in my life. I need those two kinds of thinking: highly proficient, agent-maxing young kids (they don't necessarily need to be young, but really leaned in, high curiosity), which is creating a force multiplier in my business, and then deep expertise. Now, outside of those, another group I've thought of is people with extremely great IRL people skills, because we do meet people in real life.
We greet you when you arrive here. We go for lunch with the big clients that we have, whether it's Apple or LinkedIn or whoever it might be. We, you know, we need to schmooze.
Mhm.
And we have teams who, you know, are in person in the office. So we do a lot of stuff IRL, and increasingly we're building communities, even for this show. We're doing community events all around the world. So we need people that are good at that as well: IRL, bringing people together in real life and organizing stuff. Those are the three groups of people that I'm like, you know, irreplaceable right now.
And if all the roles that could be done by AI agents, if we were to replace them with AI agents, do you think you would still have these three pools of people to hire and promote into the three critical things that you need in the long term?
If things carry on at the current rate of trajectory, yeah, one could assert that even those roles would experience pressure. People think of things either statically, linearly, or exponentially.
Yeah.
If you imagine an exponential rate of improvement, which is kind of what I've seen, even like a 10% compounding rate of improvement, at some point I think what remains is actually the IRL, irreplaceably human stuff, human to human. Our Maslowian needs of being in person, like we are now, aren't going to change. We need connection. Humans get very sick when they don't have other human beings in their life and strong, deep relationships.
100% agree. So that stuff is going to matter a whole lot. I
have this contrarian, weird take that actually maybe this is the first technology that's going to deliver on the promise of making us human and connected, because we're going to be rendered useless at everything else other than what humans are good at. Cuz all the other technology said, "Oh, we're going to make you more connected, connecting the world," and they disconnected the world and isolated the world. But maybe this is the one. It's so intelligent now that it doesn't need us to be around in spreadsheets anymore.
Do you see that actually happening in real time right now? That it's making us more able to be in person, connected with one another, having deeper social community engagements?
Yes.
And I'll give you some data points.
Okay.
Data point number one: the Financial Times released a report on social media usage. And what they saw is 2022 was the peak, and it's plateaued ever since. The generation that's plateaued the fastest, and is heading down, is the younger generations. The boomers are still off to the races, right? So, on Facebook and stuff. And then you look at the way Gen Alpha are using social media. They're not posting as much. They call it posting zero. They're scrolling sometimes, but they're in dark social environments like WhatsApp and Snapchat and iMessage. They're not, like, performing to the world. They also value IRL experiences much more than any other generation.
They're like not getting smashed. We're
seeing every brand has a run club.
Um, I mean, running is exploding around the world, and we're seeing this real sort of almost innate realization that technology let us down at some fundamental level. Like, dating apps let us down, social networking kind of has let us down. And we're seeing, I think, maybe a bifurcation of society where a lot of people are going, "I want to go back to what it is to be a human." And I would imagine that in such a world, where intelligence is so sophisticated that we no longer needed to sit at laptops... like, I think screen time is going to continue to fall. I think you go into an office, you're not going to see people sat at laptops. You're going to see something completely different. And then, you know, we talk about robots, Optimus robots. Elon says there'll be 10 billion Optimus robots. Elon has been wrong with timing before. He's almost never been wrong on the big things completely; it's just his timing has got a bad track record. Um, so I think he's probably right. You
know, I think I've got some people on the way from Boston Dynamics and these other big companies, like Scale AI, and they're actually bringing the robots here to show it, like folding laundry, doing the dishes. I'm not saying that's what I would want in my home, but I think factory work is going to completely change. I think a lot of manual labor is going to completely change, and I think we're going to be forced to do what only we can do. Um,
Sebastian, who's the CEO of Klarna, has actually just called me.
Hello, Sebastian. You all right?
Hey, how are you?
I'm good. How are you?
It's been a while.
It has been a while since you were on the show. I was just saying we do need to get you back on.
I just had a couple of simple questions, cuz, you know, I do a lot of interviews, and Klarna is always mentioned, because I think the media has said that you, like, doubled down on AI and then you reversed because it didn't work out. So I know I spoke to you a while ago and we exchanged a couple of DMs about it, but that was more than... it was almost a year ago now.
So I just wanted to get an update on Klarna's business, AI agents, and all of that, if possible.
First and foremost, we were early on: we released AI to support our customer service, which had that initial benefit of more calls being dealt with by AI, which customers liked because those calls or chat messages were much, much faster and more qualitative. Since then, that has actually expanded slightly. Um, what
we did, however, try to communicate as well is that we believed, in a world where AI is cheap and available, the value of human interaction will be regarded as higher. So the future of customer service VIP is a human, and we have hence doubled down on providing more of that. But at the same time, the efficiency gains within the company have continued. I mean, we used to be about 6,000 people, and now we are less than 3,000; it's two, three years since we stopped recruiting, and at the same point in time our revenue has doubled. Right? So you can clearly see that AI has allowed us to do more with fewer people, but we have avoided layoffs and instead relied on natural attrition, when people kind of move on to other jobs. I mean, from my perspective, we will continue to, you know, not really recruit much. I mean, we recruit a little bit here and there, but we expect that kind of natural attrition of 10-15% per year to continue, and to become fewer. I think the big breakthrough was really in November, December last year, where even the most skeptical engineers, who were very well-renowned and appreciated, like the founder of Linux and people like that, basically said that coding has now been resolved, and hence, you know, you don't need to code anymore. And that was kind of a common sentiment. So I think in coding, in engineering work, there has definitely been a tremendous shift in the last six months.
What do all these people go and do, Sebastian?
I am optimistic. I mean, I think obviously people will have a lot of opinions about this topic, but I still believe that we are going to move towards a richer society. Now, in the short term, there could be more worry about what happens if people don't get a job and so forth. But in the longer term, I am optimistic about what it means for society and humanity.
Thank you so much, Seb. I'll chat to you soon. Thank you for taking the time. I appreciate you, mate. Thanks.
All right. All right. Bye-bye.
You know the little traditional SIM card that goes inside of our phones? They haven't changed at all since they were invented in the '90s. You have this physical piece of plastic that means you're locked into one carrier, one network, and the second you cross a border, that carrier can start charging you whatever they want. But there are alternatives, and today's sponsor, Saily, is one of them. It's an eSIM app that gives you a safe and secure data connection in over 200 destinations. All of their eSIMs have built-in cybersecurity, which is great if you're traveling for work and looking at confidential material. I've been using Saily whenever I travel, because the connection is always reliable and it saves me a ton of roaming fees. It also means I don't have to deal with all of the faff that surrounds sorting out a SIM everywhere I go. If you want to give it a try, download the Saily app from the app store now and scan the QR code on screen. And if you want 15% off your first purchase, use my code DOA when you get to checkout. That's DOA for 15% off. Keep that
to yourself. This is something that I've made for you. I've realized that the DOAC audience are strivers, that we have goals we want to accomplish. And one of the things I've learned is that when you aim at the big, big, big goal, it can feel incredibly psychologically uncomfortable, because it's kind of like being stood at the foot of Mount Everest and looking upwards. The way to accomplish your goals is by breaking them down into tiny, small steps. And we call this in our team the 1%. And actually, this philosophy is highly responsible for much of our success here. So, what we've done, so that you at home can accomplish any big goal that you have, is we've made these 1% Diaries. We released these last year, and they all sold out. So, I asked my team over and over again to bring the diaries back, but also to introduce some new colors and to make some minor tweaks to the diary. So now we have a better range for you. So, if you have a big goal in mind and you need a framework and a process and some motivation, then I highly recommend you get one of these diaries before they all sell out once again. And you can get yours at thediary.com.
And if you want the link, the link is in the description below.
Any thoughts?
Well, I actually had thoughts on something that you said before he called, which is: you were saying that the Gen Zers, like, there's this trend that they're actually disconnecting from technology. So they're becoming more in person. And then there's this other class of workers that are actually leaning into the technology, but then becoming more human, because by leaning into the technology they're realizing that they should actually just be spending more time doing in-person interactions rather than staring at a spreadsheet. And so they're no longer doing the typing, whatever. I really want to go back to
whatever. I really want to go back to this New York Magazine piece that just came out because what you're describing is true for a very specific category of people, which is often like the business owners
and leadership within companies that actually can make these decisions on how they spend their time and what they ultimately do with their time. But what
the piece talks about is the working class like people like people who are not business owners that are then having
to experience being laid off and then working for the data annotation industry which is now one of the top jobs on LinkedIn by the way. Um the yeah so
LinkedIn had a report that showed the top 10 jobs with the highest growth in the last year and data annotation is on that list.
And for anyone that doesn't know: what is data annotation?
Yeah. So data annotation is the process of teaching these chatbots, or any AI system, to do what they ultimately are able to do. So the fact that ChatGPT can chat is because there were tens of thousands or hundreds of thousands of people that were literally typing into a large language model and showing it: this is how you're supposed to respond when a user types in a prompt like this. Before they did that work, ChatGPT didn't exist. Like, you would prompt the model, and the model would generate some text that was not in dialogue with the person. It would kind of generate something that was adjacently related.
Is this what they call reinforcement learning, where you kind of give it, like...
It's a part of the process of reinforcement learning. So you do data annotation, which is literally showing lots of different examples of things that you want the model to know, and then reinforcement learning is getting the model to train on those examples iteratively, in a way that then gives the model some of those capabilities. And what the New York Magazine piece highlighted is that many, many of the people that are getting laid off now, or are struggling to find work, and these are highly educated people: they're college graduates, PhD graduates, law degree graduates, doctors, and again, like, award-winning directors, that are struggling to find employment in the economy, because the economy has been very much restructured by AI. They are
then finding themselves serving this industry, and the industry is designed in a way that is extremely inhumane. Because the companies that use these data annotation services, like, there's these third-party providers that are data annotation firms, and an OpenAI, a Grok, a Google will hire these firms to then find the workers to perform the data annotation tasks that they need. These third-party firms are incentivized to pit workers against each other, because they want this data annotation to happen at speed and as cheaply as possible, so that they can also compete with one another in this middle layer to get the bid, the contract, from the client. And so
all of these workers that were interviewed for this New York Magazine story talk about how they actually no longer have an ability to be human, because they are waiting at their laptop to be pinged on Slack for when a project is going to open up for data annotation. Because they've tried job hunting; they literally can't find anything else. This is the thing that's going to help them put food on the table for their kids.
And there was this one woman who said, like, "I have so much anxiety about when the project is going to come, when it's going to leave, that when the project came, it was right when my kid was coming off of school. And I just started tasking furiously, because I don't know when it's going to go, and I need to earn as much money as possible in this window of opportunity. So then, when my kid came home and tried to talk to me, I screamed at my child for distracting me." And then she was like, "I've become a monster. And I'm not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers that are being laid off is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine that all of these AI executives are saying is then going to come for everyone else's jobs." And so, what you were saying about this class of workers, the business owners that get to become more human because there are all of these AI models now doing the tasks that they don't have to do anymore: it is at the cost of the vast majority of people, who are not business owners, that are struggling to find work and getting absorbed into the work of providing these technologies that the business owners can use. And instead of becoming more human, they feel like their humanity has been squeezed and diminished, and they have no ability to have control, agency, and dignity in their lives anymore.
I think this is a big question that kind of pertains to this graph here, which is, you know, all of these people, if we believe Anthropic's prediction of who will be disrupted: these people in these industries, like arts and media, legal, life and social sciences, architecture and engineering, computer and maths, business and finance, and management, and also office and admin. These people, if we believe this, would have to retrain at something else. And unlike the Industrial Revolution, where you might get 10, 20 years to retrain because factories take a long time to build, the distribution layer that AI sits on top of is the open internet. So this is why ChatGPT can go and get hundreds of millions of users in no time at all and become the fastest-growing company of all time. Um, one of my fears is that this disruption takes place at a speed where we can't transition.
And that was, you know... I think you said that sentence in the passive voice: the transition would happen at a speed. But who is driving that speed?
Um, it's the companies and their race with one another.
Yeah. And so they are driving the transition to happen at a speed at which it would be really hard to take care of all of the people that would be bulldozed over.
This is one of the crazy questions that no one can answer for me when I sit with these people that are AI CEOs. So I go, "So what happens to the people, if you agree that this is going to happen at super speed?" You know, I
spoke to that CEO of Uber, Dara, who said very similar things to what you're saying: you know, there'll be data labeling jobs, for example, for the drivers. But they can't all become data labelers. And there's a question around meaning and purpose and fulfillment, and what comes from losing your meaning in life. I also sit here with so many people who talk about how their father lost their job in Iran or some other country and came to the United States and had to be a toilet cleaner. One particular case was a doctor in Iran, but came to the US and was a toilet cleaner, and had to deal with the sense of shame that that particular person felt, and the lack of dignity that that caused, and how that made that person's self-esteem feel, and the depression and alcoholism that transpired from that. Um, if this happens at a large scale across society, there's going to be a ton of consequences like that.
I mean, this is, like, the core theme of my work. And the reason why I'm critical of these companies is that they are creating technologies in a way that creates the haves and have-nots in an extreme form. It's exacerbating the inequality that we already see in the world. Like, the people who have things will have way more riches. They'll have way more free time. They'll be allowed to be more human. But the people who don't have things are being squeezed even more. And it's not just from a work perspective. I mean, I talk in my book also about the environmental and public health crisis that these companies have created, where they are building these colossal supercomputing facilities in communities all around the world, and they specifically pick some of the most vulnerable communities. We're sitting in Texas right now. One of OpenAI's largest data center projects is being built in Abilene, Texas, as part of the Stargate initiative, which was an effort announced at the beginning of Trump's second administration to spend $500 billion on AI computing infrastructure.
This facility, when it's finished, will consume more than a gigawatt of power, which is over 20%...
So this is actually a little bit inaccurate now. Um, this was something that circulated online for a while, but there are updated numbers. Just for someone that can't see, cuz they're listening on Spotify or something: it's a picture of the size of this facility.
So this is not the Abilene, Texas one. This is a Meta facility. Yeah. So, let's first talk about OpenAI's facility in Texas. That one would be the size of Central Park, and it would run a million computer chips, and it would require the power of more than 20% of New York City.
Do you know, one of the things which I found confusing, so I'd like to alleviate the dissonance: I thought you were saying earlier that you didn't think the job disruption promises were real.
No, what I was saying is that when we talk about what these executives predict about the future, we need to understand that they are ultimately trying to influence the public in a way that allows them to continue maintaining control over the technology.
But objectively, do you think that the job disruption that they talk about, where...
Yeah, yeah. I mean, I don't want to comment specifically on, like, this chart, but we've already seen in job reports that there is a restructuring of the economy happening right now.
Yeah.
But going back to the data center. So this supercomputer facility, it's a Meta supercomputer facility, is being built in Louisiana, and it would be four times the size of the Abilene, Texas one and use half of the average power demand of New York City.
So it's one the size of Manhattan?
This makes it seem like almost all of Manhattan, but it would be one-fifth the size of Manhattan. When these facilities go into these communities, what happens?
Power utility prices increase, grid reliability decreases. The facilities also need fresh water to generate the power for powering them, as well as fresh water for cooling. And there have been lots of documented stories of communities that are already really constrained in their freshwater resources: they're under a drought when a facility comes in, and then the community is actually, like, competing with this facility for fresh water. I talk about one of those communities in my book. And
also, sometimes these facilities, instead of connecting to the grid, have a power plant pop up next to them. So in Memphis, Tennessee, where Musk built Colossus, the supercomputer for training Grok, he used 35 methane gas turbines to power the facility. This is a working-class community, a Black and brown community, a rural community, that was not even told that they would be the hosts of this facility. And they discovered it because they literally smelled what seemed like a gas leak in all of their living rooms. And that's when they discovered that these methane gas turbines were taking away their right to clean air. And this is a community that's already been facing a history of environmental racism. They
had already had lots of struggles to access their right to clean air. And now
there's this huge supercomputer that's landed in their midst that is pumping thousands of tons of toxins into their air, exacerbating the asthmatic symptoms of the children, exacerbating the respiratory illnesses of other people. It's one of the communities that has the highest rates of, um, lung cancer. And then they also have supercomputers taking their jobs. So this is what I mean: the haves and have-nots are fundamentally being pulled apart even further. Like, if
you, in this version of Silicon Valley's future, are in the misfortunate category of being a have-not, we are talking about you now getting a job that is way worse than what you had, because you might be doing data annotation, and you might be treated as a machine rather than as a human, to extract the value of your labor for perpetuating this labor-automating machine that these people are building. You might be competing with these facilities for freshwater resources. They're also polluting your air. Your bills have increased. So the affordability crisis is getting worse.
Like, how is that making people able to be more human?
What do we do about it?
Yes.
Okay. So, one of the analogies that I always use is: AI is like the word "transportation." Transportation can literally refer to everything from a bicycle to a rocket. And we have nuanced conversations about transportation, where we always say we need to transition our transportation towards more sustainable options. We need a transition towards, you know, public transport, electric vehicles. And we don't ever say everyone should get a rocket to serve all of their transportation needs, right? Like, we're in Austin. If you used a rocket to fly from Dallas to Austin, that would just make no sense. It's just a disproportionate use of resources to get the benefit of getting from point A to point B. This is how we should think about AI. So all of the models that we've been talking about, I like to think of them as the rockets of AI. They use an extraordinary amount of resources, and they provide some dramatic benefit to some people, but they're also exacting an extraordinary cost on a large swath of people, because of the costs of developing this technology.
Why don't we build more bicycles of AI?
This is things like DeepMind's AlphaFold, which is a system that predicts how proteins will fold based on amino acid sequences. It's really important for accelerating drug discovery and for understanding human disease, and it won the Nobel Prize in Chemistry in 2024. And the reason why it's a bicycle of AI is because you're using small, curated datasets: you just have data that has amino acid sequences and protein folding. So that means you need significantly less computational resources to develop the system, which means significantly less energy, which means less emissions, and so on and so forth. And you're providing enormous benefit to people.
It feels like the horse has left the stable in this regard, because they've already taken people's IP, they've taken media, they train on this podcast. We know they do, because it shows that they do. Um, I think there's a button actually in the back end of YouTube now that allows you just to click it, and it says, "We will train on your YouTube channel." Um, so the horse has kind of left.
Here's the thing. If the horse truly had left the stables, they wouldn't have to train on anything anymore. Why is it that their appetite for data has actually expanded? It's because, in order to build the next generations of their technologies, in order to have the technologies continue to be relevant and continue to update with the pace of new knowledge creation and society's evolution, they need to train again and again and again. And why are they employing more and more data annotation workers over time? It's because they need more and more of that work over time. I mean, I've been reporting on data annotation work for over seven years now, and it's not gone down. It's increased.
Do you think there's any chance of it going down? Do you think there's any chance of this sort of brute-force scaling approach, where you take data, you take computational power, energy, and, you know, you have the data labelers, and you're building out more and more parameters for the models... do you think there's any chance it's going to stop, or go in a different direction other than the one it's going in now?
I would love to reframe the question and say: what should we be doing in this moment, where it's not going down, where we do recognize that actually these companies, in this moment, need continued resources, inputs, and labor to perpetuate what they are doing?
Yeah. Because this sounds like "stop," and I just feel like "stop" is like a HUD. It feels like... I just think, you know, with the government in place, they're supporting these companies like crazy. Globally, this is happening. So I'm like, "stop" doesn't feel...
I always say we need to break up the empire, and we need to develop alternatives. And we are already seeing a flourishing of incredible grassroots movements that are applying an enormous amount of pressure to the way that the empire is trying to unfold its agenda.
80% of Americans in the most recent poll think that the AI industry needs to be regulated.
Yeah.
When was the last time that 80% of Americans were on the same side of an issue?
No. Yeah. When I have these conversations on the podcast, the comment section is clear.
Yeah.
There's no disagreement. There's no one in there going, "Oh, no, I think they should crack on."
Yeah. Dozens of protests against data centers have broken out all around this country, the US, and all around the world.
So, what do we do about it?
So, these are people that are doing something about it. They are actually reasserting their agency and exercising democratic contestation against the ways that the empires are going about their business.
What goal should we be aiming at? So, if
I said to my audience, Janet at home, because this is kind of what I see in the comments, it's hopelessness. It's like, what can I do? I'm just a...
Yeah. Well, the goal is not that we completely get rid of this technology. The goal is that these companies need to stop being empires.
And the way I define a typical business versus an empire is that the empires are predicated on this idea that they do not have to provide a fair exchange of value with the workers who work for them, or the people who use them, or all of the other people that are involved in the supply chain of producing and deploying these technologies. They can extract and exploit, and extract and exploit, and get more value than what they offer. Whereas with typical businesses, there's a fair exchange: you buy a service, and you feel like you got the same amount of value as what you paid for it.
But for these data annotation workers, for example, they do not feel in any way that they're being paid the same value that they provide to these companies. So that's, for me, the north star: we should be pushing back and holding these companies accountable when they operate in an imperial way. And that's what we've seen with all of these people that are now literally protesting in the streets against data centers, and having an enormous effect, by the way, actually stalling data center projects and also completely banning data centers from being developed in their localities.
We're seeing that with artists and writers that are suing these companies for intellectual property infringement and creating a huge public conversation about how we actually want to protect our intellectual property. Three weeks ago, I met Megan Garcia, who is the mother of Sewell Setzer III, the 14-year-old who died by suicide after being sexually groomed by a Character.AI chatbot.
And when that happened, she was obviously incredibly devastated by what had happened to her son. But she also decided to do something about it. She sued the company, and that lawsuit then sparked many other parents and families who were actually experiencing similar things to sue these companies as well. That has created an enormous public conversation about what these companies are actually doing when they exploit and they extract. What is the cost to the lives of people around the world, including children?
So, what do you think my audience should do if they agree with everything written in your book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI? If they agree with everything said here, if they agree with everything we've discussed today, they're concerned about their kids, they don't want everyone to become data labelers, they don't think that's a particularly great solution, what can they actually go and do?
When I was writing the book, the only discourse that was happening was: this is the best thing since sliced bread.
Mhm.
Because of all of the actions of these people, saying they're not happy with the things that these companies are doing, we now have 80% of Americans that want to regulate this industry. And so I would say to people: think about all of the ways that your life intersects with the resources that the AI industry needs to perpetuate what they do, and also the spaces where they would need to deploy these technologies to continue having broad-based adoption. So you're a data donor to these companies. You could withhold that data. And that's what those artists and writers are doing: they're suing these companies to try and create mechanisms by which that data would then be withheld. You probably have a data center popping up around you. If you're in a school environment or a company environment, you're probably having a discussion in those environments right now about what the AI adoption policy should be. And these companies... I was talking with some OpenAI employees just the other day, and they were telling me that it's understood internally that the revenue targets for the company are extraordinary, and they need things to go flawlessly for it all to work out. And so they would need every single person to adopt this, every single space to adopt this. They would need to be able to build their data centers at the speed that they're trying to build them. And so what I would say to every one of your viewers is: let's not make it go flawlessly if we don't agree with what they are doing.
Ah, okay. I got you.
And then let's build alternatives.
Because the thing is, what I'm saying is not that these technologies don't have utility. It's that specifically the political economy that has emerged to support the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed with much more efficient methods, with much less resource consumption. And we have a lot of different other AI systems at our disposal that are like the bicycles of AI, that we also know provide extraordinary benefit at very little cost. So let's break up the empire, and let's forge new paths of AI development that are broadly beneficial to everyone.
It's strange. I think I've trained myself to deal with dichotomies in my head. And this, for me, is such a dichotomy, where I, as a CEO and as a founder, as an entrepreneur and someone that loves technology, I think it's incredible. AI is absolutely incredible. It's just so amazing and incredible, the things it's enabled me to do and create.
Yeah. Because it's designed to enable people like you.
And my car driving in the morning and being safer. Incredible. I think, you know, the billion-odd people that use AI tools, or ChatGPT or whatever it might be, they'd probably say that it's added value to their life. But, and this is the part that people find confusing, and I invest in companies that are heavily using AI, but the big but is: is it possible to think that is true and also think that there are significant unintended consequences, which the history of technology should have taught us to take a moment, to pause, to talk about? Because I think you can absolutely have both of these things in your head.
And what I'm saying is that this tension doesn't have to be a tension, because we could actually preserve the utility and benefits of these technologies but develop and design them in a different way that doesn't have all of these unintended consequences.
Yes. And I think there needs to be a big social conversation, which is why I have so many conversations about AI on the show. There needs to be a big social conversation about being intentional about the social and environmental impact, and that conversation is not being had in government. From what I can see, the conversation takes place in the industry, and actually trying to pull it out of the industry and open people's minds to it is hopefully what we've been doing over the last couple of months with this subject.
Because I think it actually has been happening everywhere outside of the industry. For local governments and state-level governments, there have been huge conversations about this everywhere. Like, I've been on book tour, I've been to dozens of cities around the world. People are having these crucial conversations everywhere.
I have not gone to a single city...
Yes. Everywhere. Even here at South by.
Yeah. I haven't gone to a single city where the room is not packed and people are not wrestling with the same exact questions as every other person in every other room that I've been in.
Speaking of packed rooms, I know you've got to go because you've got a talk today. So, we've got a last question, which is the closing tradition on this podcast. How would your advice to a friend with a terminal diagnosis differ from what you would do yourself?
That's a great question.
Differ from what you would do yourself?
Oh my god. I would tell them, like, enjoy, live life for yourself, and take it easy. And yeah, I am not taking it easy.
Well, I think it's a good thing you're not taking it easy, because you're leading a conversation which is incredibly important. And I think that's the thing: I think the conversation is the important thing. And, you know, because of algorithms and echo chambers, it's so rare to have a conversation these days, especially a long-form one.
I agree.
Like this. So, I think they're so important. And your book is for anyone that's curious. I think a lot of people will have learned a lot of stuff today, because I sit here and interview AI people all the time, and I've learned so much today. From reading your book and the extensive, objective perspective that your book takes, you're able to unravel all of these stories that we sometimes see in tweets, where we don't know if they're true or not, because you've gone and met the people and you've done your research. You're an incredibly intelligent person, an extremely intelligent person, who clearly has humanity's interests as your north star, and that shows up in everything you do and everything you say. So please continue to fight in the way that you are, because it's an incredibly important fight. It's people like you that are, I think, galvanizing the world to take the collective action that we're starting to see everywhere.
Yeah.
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao. I'll link it below for anyone that wants to read this book. I highly recommend you do. It's a New York Times bestseller for good reason. Karen, thank you.
Thank you so much, Stephen.
YouTube have this new crazy algorithm where they know exactly what video you would like to watch next based on AI and all of your viewing behavior. And the
algorithm says that this video is the perfect video for you. It's different
for everybody looking right now.