Sora 2 Backlash Is Brewing | EP 159
By Hard Fork
Summary
## Key takeaways - **Sora Deepfakes Spark Backlash**: OpenAI's Sora is facing backlash after users created deepfakes of historical figures like Martin Luther King Jr. and celebrities like Brian Cranston, prompting OpenAI to update its policies. [02:56], [03:24] - **Amazon's Ambitious Automation Plans**: Amazon plans to replace over half a million jobs with robots, aiming to automate 75% of its warehouse operations within the next decade, while also strategizing on managing public perception. [26:32], [29:54] - **AI Browsers: Novelty or Necessity?**: New AI browsers like OpenAI's Atlas offer integrated AI assistants for tasks like summarization, but agent features for autonomous actions are slow and unreliable, and raise privacy concerns. [49:07], [53:29] - **Prompt Injection Risks in AI Browsers**: AI browsers with agent capabilities are vulnerable to prompt injection attacks, where hidden instructions on web pages could trick the AI into performing malicious actions like unauthorized purchases or data theft. [01:02:18], [01:04:08] - **Amazon's Automation Savings vs. Public Image**: Amazon projects saving only 30 cents per item through warehouse automation, while internally grappling with how to manage the reputational fallout of job displacement and community impact. [34:40], [41:38] - **OpenAI's Rapid, Risky Product Rollouts**: OpenAI's history of rushing products like Sora and its voice mode suggests a pattern of prioritizing rapid deployment over thorough policy consideration, potentially damaging public trust in AI. [13:35], [14:18]
Topics Covered
- OpenAI's 'Iterative Deployment' is a recipe for backlash.
- Why corporations no longer fear public shame over AI.
- Tech companies exploit regulatory gaps for market dominance.
- Amazon plans to replace 500,000 jobs with robots.
- AI browsers offer limited utility but pose serious risks.
Full Transcript
There's a sort of this whole genre of
like what I call bad beginnings which is
like when you start saying something and
you realize like oh this is not going
well for me. It's like things that would
fall into this category include per my
recent conversations with the estate of
Martin Luther King Jr.
>> This is obviously a very sensitive
subject. So they talk about debating
ways to manage this like should we not
talk about robots? Should we talk about
a cobot, which is a, you know,
collaborative robot?
>> And have you had it try any of these
agent mode tasks?
>> I have. And I want to say that I do
think that company's imaginations are so
limited here. Like you would truly think
that the only two things that people do
in a browser according to Silicon Valley
are booking vacations and buying
groceries.
>> Yes,
[Music]
this was crazy. Google's Willow quantum
chip is using a new quantum echo
algorithm that ran computations 13,000
times faster than supercomputers. Kevin,
oh, I see it's performance review season
over there in Google quantum computing.
>> Oh, you know, my Echo chip did a quantum
compute.
>> You know, I need a raise.
>> No matter how many times I learn what
quantum computing is, I do immediately
forget it the next day. And it just like
this is how I am. This is why I'm so why
I love reading mysteries so much is cuz
I forget who did it like the day after I
put the book down. That's what quantum
computing is for me.
>> You know, we have to fill out our
performance reviews soon at the New York
Times. And I think I'm just going to put
in there that I um I solved a quantum
computing problem this year cuz how will
they fact check me
>> now? Why don't they email me help asking
to like help on your performance review?
>> Oh, you want to do a 360 review?
>> I want to do a 360.
>> You've got some feedback.
>> Yeah.
>> I'm Kevin Roose, a tech columnist at the
New York Times. I'm Casey Non from
Platformer and this is hard for this
week. Open AAI's big sloppy mess. Why
the company is backpedaling over Sora.
Then the Times Karen Weiss joins us to
discuss her scoop on Amazon's plans to
reduce its hiring needs by hundreds of
thousands of workers. And finally, AI
browsers are here. Our first impressions
of Chat GPT Atlas. Well, Casey, it's
been another busy week for the Open AI
Research and Deployment Corporation. I
learned that's what they call
themselves.
>> Really?
>> Yeah. They they have these hoodies. I
saw a guy on the train the other day
with a research and deployment company.
It didn't even say OpenAI, but that's
sort of their new tagline.
>> Interesting. Well, I would say based on
the events of the past week, Kevin,
maybe OpenAI should do a little more
research and a little less deployment.
>> Yeah. So, let's talk about it. We're
going to talk about two OpenAI stories
this week. One about their new browser.
We'll talk about a little later. Um, but
first, we got to talk about what's been
happening with Sora. Uh we've talked
about this the last couple weeks on the
show, but this continues to be a total
mess for OpenAI. This app and the
various sort of controversies and
backlashes swirling around it. So Casey,
what is going on with Sora? What is the
latest here?
>> Well, I would say there have been two
big developments over the past week,
Kevin. One, the company has said that it
is going to essentially like crack down
on political deep fakes based on
historical figures after the families of
some deceased political figures started
to complain. And then the company has
also said it's going to try to build
some guard rails around the use of
copyrighted intellectual property after
many people in Hollywood freaked out,
including Breaking Bad star Brian
Cranston.
>> Yes. Yes. They managed to beef with
Brian Cranston and the estate of Martin
Luther King Jr. in one week. And Casey,
I think that qualifies as a bad week at
the office.
>> It's not a great week. It's like, you
know, there's a sort of this whole genre
of like what I call bad beginnings,
which is like when you start saying
something and you realize like, oh, this
is not going well for me. It's like
things that would fall into this
category include, per my recent
conversations with the estate of Martin
Luther King Jr.,
Also in this category, um, regarding the
Nazi tattoo I got while in the Marines
and regarding the amount of lead in my
protein shakes, you know, when you said
any of those things, it's not it's not
been a good week.
>> Not been a good week.
>> So, let's start with Martin Luther King
Jr. and his estate and their beef with
>> Sora. Why is he so significant a figure
in American history? Well, according to
my Sora feed, he's a a sort of
historical civil rights icon who liked
to get up and give speeches about Skiiby
Ohio toilet riz. He also appears to love
to play Fortnite based on the sort of
videos I've seen.
>> I have a dream. I have a dream that one
day I will stand on the hilltop and drop
right into a game of Fortnite.
Lord, that's going to be fun. So, what
we're talking about is this sort of
emerging genre of Sora videos, which I
started seeing pretty soon after
downloading the app, where people would
just take Martin Luther King's iconic
speeches, uh, such as I have a dream,
and make him say other things, things
talking about Gen Z trends, things
talking about video games, um, endorsing
various products. Um, this was funny to
some people, offensive to others. Uh,
you know, who didn't like it was the
estate of Martin Luther King Jr.
>> Yeah. And it it wasn't all, you know,
playing Fortnite and and talking
Skibbity toilet. Some people were also
having MLK make monkey noises and and
putting him in other just like overtly
racist situations. And so, yeah, his
family members complained. The
Washington Post wrote a great story
about families of him and other deceased
historical figures saying, "Hey, like
this this really sucks." And OpenAI's
original position had been, "We believe
in free expression. people should be
able to do what they want. But I don't
know, at some point something changed
and next thing you know, OpenAI is on X
posting a statement saying while there
are strong free speech interests in
depicting historical figures, OpenAI
believes public figures and their
families should ultimately have control
over how their likeness is used, which
was a brand new policy as of the moment
that they posted that. And and this is
somewhat uh confusing to me because part
of the way that Sora works is that in
order to make a cameo of someone to use
their face in a video, they have to sort
of give you permission to do that. So
presumably, you know, Martin Luther King
Jr.'s estate did not go into their Sora
settings and say anyone can make a photo
of me. But they they sort of use some
public figure loophole or how did that
work?
>> That's right. So if you're just an
average person, people cannot go in like
you Kevin, like a very average person. I
can't just go in and make a cameo of you
unless you have changed your settings
that way. But OpenAI basically said it
is open season for historical figures.
And of course there's lots of video out
there of MLK and others and they just
said yeah go crazy if you want. Got it.
So now they are saying actually we've
thought about it and after consulting
with the King estate we are no longer
letting people do this. Yeah.
>> Like what happens if you try to make a
video with Martin Luther King Jr. now?
>> Now it will just you'll just get
blocked. You know it violates the
content policies. But, you know, I just
want to say it was so obvious that
people were going to do this, you know,
and in its ex post, OpenAI suggested
that the reason that it had made this
change was that people were making quote
disrespectful videos of MLK. Like, you
really thought that people were only
going to make respectful videos of
historical figures? Like, let me be
clear. The only reason to use Sora is to
create a video of someone doing
something that they would not ordinarily
be doing, right? It is not a technology
to make uh people give beautiful
speeches about civil rights.
>> I confess I am somewhat uh implicated in
this because I have not made a Martin
Luther King Jr. Sora video, but I did
make a video of Mr. Rogers um saying Gen
Z catchphrases um cuz I thought it was
funny
>> after everything Fred Rogers did for
this country and this is how you repay
him?
>> I felt bad about it if it makes you
makes it any better. I did have a moment
of like, you know, guilt and and sort of
shame after doing it. Um, did I do it
anyway? Yes. Did it get approximately
four likes? Also, yes.
>> But I mean, look, this I'm here's what
I'm telling you. This is what the
technology is for. It is doing exactly
this thing. And so, if you don't have in
your mind a policy for for how you want
to handle that before you launch it, I
think you're doing something
irresponsible.
>> Okay. So, the estate of Martin Luther
King Jr. is mad at OpenAI over Sora. Who
else is mad at Open AI over Sora? Well,
Kevin, that brings us to Brian Cranston,
who presumably was, you know, minding
his own business, uh, down in
Albuquerque making methamphetamines,
when all of a sudden he opens up the
Sora app and finds himself in videos
with Michael Jackson and Ronald
McDonald, which is what we like to call
around here a nightmare blunt rotation.
>> Hey everybody, it's Michael here and I
am so happy today I got somebody really
special to introduce to you. Check it
out. This is my new friend.
>> Hi, I'm Walter. Pleasure to meet all of
you. Michael's been showing me around
and uh it's been a good day.
>> I actually haven't seen them myself
because I don't want to support Ronald
McDonald that way. I think he has a lot
to answer for.
>> Yeah.
>> So, here's why this is a problem. This
was supposed to be an optin regime. If
if celebrities images were going to
appear in Sora, it was supposed to be
that they had to opt in. But as Winston
Cho reported in the Hollywood Reporter
last week, that's actually not what
happened. Days before the release of
Sora, OpenAI went to the big talent
agencies and the studios and said, "Hey,
if you don't want all of your
intellectual property in our app, you
have to opt out." Which companies like
Disney were putting out statements being
like, "That's not actually how copyright
works. Like, you don't have cart blanch
to do whatever you want with our IP
unless we opt out." And so this starts
to get people in Hollywood really mad.
Got it. So I saw the statement from
Brian Cranston um and SAG ARA which is
the union that represents actors um and
a number of other talent agencies
basically saying hey we don't like this
but what what are they saying about how
open AAI has been approaching that
because OpenAI from my understanding did
actually try to sort of go to Hollywood
before this app came out and say hey
just FYI we are going to be releasing
this um but we have like taken steps to
sort of get ahead of some of or the
issues we think you might have with it.
>> That That's right. But in practice, this
just was not true. People were able to
create videos of Pokemon and Star Wars
and Rick and Morty and other
intellectual properties whose owners had
never given their permission. Brian
Cranston had not given permission for
his likeness to be used in the app. And
so you wind up having a lot of what
OpenAI I always love these euphemisms
that these companies use. OpenAI calls
these unwanted generations. I was like
like generation Z. But it turns out that
no, this is about unwanted videos
appearing within the Sora feed. So, you
know, it's very funny to me to come out
afterwards and saying on reflection,
we'd like to strengthen the guardrails
when in fact there were not guard rail.
You know what I mean?
>> Yes. If I drive my car off the side of
the road because there's no guardrail
and the California Department of
Transportation says, "We're going to
strengthen these guardrails." I'm
saying, "Where was the guard rail?"
>> I I'm dead and I'm shouting at you from
hell saying, "Where was the guardrail,
Kevin?"
>> Right. There is a lot of like I don't
know like false negative or like just
people being like figning surprise like
I cannot believe that my unauthorized
generation app is causing problems
>> over unauthorized generations.
>> It's crazy. It's I I am so glad you said
that because this is the thing that has
got me so exercised over the past week.
It is that phony naivee. Is this wow who
could have ever predicted this? Because
that is just an approach that I think if
you apply it to building AI products in
the future is going to take us to some
very bad places.
>> Okay, so OpenAI is dealing with this
backlash. I think there's sort of a
larger backlash brewing over just AI
generated video and I I'm curious what
you make of this
>> that like I think there is starting to
become a consensus position especially
among people who are like not in San
Francisco and do not work in the AI
industry that like all of this is just
like bad and stupid and harmful and that
the sort of juice is not worth the
squeeze as it were. that like the
benefits of AI, whatever they might be
in the future, are not enough to justify
the enormous cost of training these
models. There's something sort of
soulless and uh and depressing about
people using AI to generate fake videos
of Martin Luther King Jr. and Brian
Cranston and Ronald McDonald uh doing
various things. I guess I'm curious
whether you think the Sora backlash is
part of that or whether what we are just
seeing is one manifestation of of a
pre-existing thing where people were
already mad about this stuff. I I mean
we are going to have to get survey data
to develop an empirical answer to that
question. But we know from like a recent
Pew survey that already about half of
Americans say that they are more
concerned than excited about the future
of AI. And my assumption is that the
Sora backlash is going to fuel that.
When I just look at my own interaction
with friends and families, the default
feeling about Sora is not what a fun new
creative tool. It is this is bad and I
hate it. And and by the way, these
aren't even necessarily people who are
like up in arms about what's going on
with MLK and Brian Cranston. This is
just sort of giving them the ick.
>> Yeah. I mean, to me, this just seems
like a continuation of this pattern at
OpenAI that extends back to the launch
of Advanced Voice Mode last year when
you can probably remember Scarlett
Johansson kind of objected to uh
references to the movie. Her OpenAI had
basically approached Scarlett Johansson
and said, "Hey, would you like to be
supportive of or involved with this
launch? Could we sort of explicitly tie
this to your character in the movie
Her?" She said, "No." They went ahead
and did it anyway. And it seems like
that is um something that they have
continued to rather than sort of being
chastised by that and learning from that
experience and saying hey maybe it's
important that we have like the
permission uh of the creators in
Hollywood before we go out and do
something that's potentially disruptive
to them. U maybe we should get their
permission. It seems like they have not
learned that lesson.
>> That's right. And and that's really
where I kind of want to land this. This
is why I think all of this matters is I
think that in building any kind of novel
technology, inevitably companies are
going to make mistakes. They're going to
go too far in some regard. There's going
to be some problem that they didn't
anticipate and it's bad and we should
talk about it. But I think companies can
kind of come back from that. But then
there are other companies that just kind
of start to make the same mistake over
and over again. Right. You bring up the
the Scarlett Johansson issue which I
think partly came out of just a rush to
release this voice mode into the general
public and look at what else we have
seen over the past year. I think there
was a similar rush to update GPT40
with what turned out to be a very
sickopantic update that was embarrassing
to the company. There was a rush to
release chat GPT in ways that sort of
cut users access off to tools that they
had become very dependent on and it
triggered this this huge backlash and
now here they are in this rush to
release this video app in part because
they want to make money the company has
said and lo and behold they either have
not thought through the policy
implications or they've just decided to
build a policy that could only possibly
bring them a huge backlash. So I look at
that Kevin and I fear this company
actually has changed a lot over the past
couple of years right how do you think
it's changed
>> well if you look at them before the
launch of chat GBT and even in the few
months after that this was a company
that was talking a lot about on one hand
wanting to introduce new technologies to
the public to see how u society would
adapt and to do that in like a way that
was too aggressive for some people but I
think was basically working out okay in
the original chat GBT Sam Alman was
going around Washington meeting with
senators saying, "Hey, we're building
something that could be really
dangerous. We want guardrails around
this. We want you to pass regulations
that rein us in." And then you just fast
forward to today and it's this allout
war between a handful of companies that
are trying to build AGI faster than the
other guy. And we are just seeing in
real time not just like guardrails being
removed, we are seeing guardrails not
being built and the company having to
come in afterwards and say, "Oh, hey,
uh, sorry about that. Yeah, we're we're
going to do something. We're we're
hearing your feedback. And the thing
that just shocks me about that is I
actually believe for a time that Sam
Alman had taken the lessons of Facebook
and the social media backlash. He had
seen everything that had happened to
Mark Zuckerberg. He said to himself, I
am not going to make those same mistakes
and now we are just seeing OpenAI do the
full Facebook when it comes to content
policy. Well, and I would say two things
about this strategy of open AIS. One is
it is brash. It is risky. it is likely
to lead to lots of backlash and people
being mad at them. And I think it is
potentially correct. I mean, what we've
seen over the past few years is that um
there are not a lot of real restraints
on companies that want to build and
release technology this way. I think the
the real risk to open AI is that people
just end up losing faith in AI as a
whole. Um, and as we've talked about
recently on this show, like the entire
economy now kind of rests on this belief
that AI is growing more powerful, that
it will soon deliver all of these like
tangible economic and social and
scientific benefits to people, that it
is not just like hoovering up a bunch of
people's data and using it to make slop.
And if that's what sort of the public
image of this stuff becomes because
OpenAI has adopted this product
strategy, I think that will be bad for
the whole AI industry, but probably not
especially bad for Open AI. Yeah. I
mean, I think like that is a a fairly
cynical view, Kevin. Like it's true in a
lot of ways. We are in the LOL matters
era of content moderation. Um, and I am
just reflecting once again on how like
we used to have a world of like business
and politics where people would go to
great lengths to avoid feeling shame.
And at some point in let's say the past
decade, we just decided we're not going
to care about that anymore. And no one
can make us feel ashamed for any reason.
And for the moment, I guess the only
real impact we're seeing here is that,
you know, a few copyright holders and
like families of historical figures are
annoyed by videos that they're seeing
online. But this is the company that
continues to build ever more powerful
technology. And when GPT7 comes out and
is helping noviceses build novel
bioweapons, I don't want there to be an
expost saying that based on recent
pandemics, the company has decided to
build some guard rails.
>> Right. Right. I mean, I've been talking
with a few people sort of in and around
OpenAI about this over the past few
weeks, just kind of taking the
temperature of like how folks over there
are feeling about this. And a couple
things that I've heard that I want to
just run by you for a reaction. One is
like this is a company that uh does not
have the benefits of having hundreds of
billions of dollars a year in search
revenue uh flooding in the door that it
can use to build AI stuff. Like that is
the situation that Google its sort of
next biggest competitor is uh in. They
basically don't have to care about
money. they can spend all of their, you
know, profits on curing cancer and
building quantum computers and
self-driving cars and whatnot, but like
OpenAI kind of doesn't have that luxury.
And so they have to figure out ways to
pay for their enormous ambitions. And
not all of those are going to be sort of
obviously pro-social and beneficial
things, but the ends will justify the
means. Just as you know, Google spent,
you know, many years, they say, uh,
building up this uh, monopoly and doing
all sorts of unsavory things in order to
get the profits that they can then plow
back into the sort of peace dividend of
AI research. I mean, I reject that for
two reasons. One, this company's stated
mission is to build AI that benefits all
humanity. So, it's like if if the
argument is in order to benefit all
humanity, we have to harm some of
humanity. Get a new mission statement,
girl. Like, come on. Um, number two, I
also reject the the premise that they
have some cash crunch. Sam Alman is the
greatest fundraiser in the history of
Silicon Valley. This company has access
to all the capital it needs. So, don't
tell me that you need to release the
infinite slot machine that makes Brian
Cranston cry in order to, you know,
build your machine god.
I don't think Brian Cranston is actually
crying unless that sort of video I saw
was uh was legit. But another thing that
I will hear from people at OpenAI is
about what they call iterative
deployment, which is one of their
favorite catchphrases over there. They
basically believe that instead of
keeping all of this research and all
these capabilities cooped up inside the
lab and then kind of releasing them all
at once every few years, that we should
have kind of a steady drip of new
capabilities from these companies that
sort of help the public update about
what is now possible with AI. And so one
defense of Sora that I've heard from
people over there is they'll say look
this technology exists. These video
models are getting quite good and we
could either sort of spring this on you
all when it is impossible to tell the
difference between fake and real and you
know without any of these safeguards or
we could kind of release it in this
iterative way where we kind of give the
world a chance to adjust and catch up
and have these conversations and
arguments about likenesses and copyright
and sort of prepare the world for this
new capability that exists and that that
is the responsible thing to do. What do
you make of that? Well, I just think
that there are so many more responsible
ways to do it than saying there is now
an app where anyone can go on and make a
video of Martin Luther King barbecuing
Pikachu, right? You could just make
whatever deep fakes you want and put
them on a website and say, "Hey, look at
the terrifyingly real deep fakes we were
able to make with this technology. We're
not going to release it to the public,
but just so you know, if you start
seeing videos out there that seem like
maybe they didn't happen, maybe they
didn't." Or you could say, we're going
to just make this available in our API
and so developers have access to it, but
we're going to closely monitor how
developers are using it and if there are
bad actors in our development ecosystem,
we are going to get rid of them. Right?
So those would be two alternatives to
just saying, "Hey everybody, go freaking
nuts."
>> Right. Yeah. I think those are both good
responses. I don't find any of the
defenses of Sora from the OpenAI folks
I've talked to all that compelling. Um
but I think they are learning a lesson
actually from the social media companies
which is you know you do something bold
and brash with very few guard rails
people get mad at you about it and you
scale it back 10%. But you've still kind
of taken that yardage even if you have
to uh turn the dials to uh install some
guardrails after the fact like you still
have kind of gotten what you came for
even if you end up having to make some
compromises.
>> Yes. And in that, when I look at this
story, Kevin, I just see exactly what
YouTube did in its early days, right?
YouTube also started out by saying,
"Hey, um, why don't you just upload
whatever you want in onto our website,
and we're just going to sort of take for
granted that you have the copyright over
whatever you're uploading." And
eventually Viacom comes along and says,
"There are more than a 100,000 clips of
our TV shows and movies all over your
network, and we're going to sue you for
a billion dollars." And this wound up
being a kind of costly legal battle. It
went on for a very long time. It was
eventually settled. But during the time
it took for that case to settle, YouTube
became the biggest video site in the
world and it won the whole game. And so
I think that there is a very cynical
rationale for everything that we're
seeing Open Eye do, which is saying,
"Hey, we have the opportunity to go get
all that market share. We're going to do
it."
>> Yeah. It's what they call regulatory
arbitrage right?
>> Uh that's one thing you could call it.
>> What else would you call it?
>> Well, this is a family program, Kevin.
I'm going to try to be polite. Okay, so
that is the next turn of the screw in
the Sora story. Uh Casey, what are you
looking at with this story going
forward? Here is what I'm looking at
going forward.
Open AAI for better and for worse is a
company that is shipping a lot of
products. We're going to talk about
another one of them later in this show,
right? Uh this this is an organization
that has figured out how to build and
release new stuff. And that stuff does
some really cool stuff. And as with Sora
does some pretty gross stuff. I think
the thing to keep your eye on as these
new products come out is is this company
truly paying attention to responsibility
anymore or is the entire ethos of the
company now just a land grab for as many
users across as many surfaces as it can
get because if that is going to be the
new mo for this company then I think we
need to be a lot more worried about it
than at least I personally have been to
date. Yeah, I think they are in a real
like throwing spaghetti against the wall
phase here. Um, and I think that is
reflected in just how many things
they're shipping constantly and
seemingly a new product or two every
week and some of it'll work and most of
it probably won't. But, you know, Casey,
one of the best pieces of advice I ever
got about journalism was that the
stories you don't write are as
important, if not more important, than
the stories you write. Mhm.
>> And I think Open AI has not learned how
to say no to a new idea or a product or
a business line yet. And I think that's
a skill that they should start
developing cuz it seems like they are
spreading their bets quite thin. They
are throwing a lot of spaghetti at the
wall and maybe they're losing the plot a
little bit.
>> Now, is that advice why you write so few
stories?
>> Yeah. Yes.
>> Okay. Interesting.
>> Yes.
>> That editor
>> I'm very proud of the stories I don't
write though.
>> That editor really did a number on you.
>> Yeah.
When we come back, how Amazon is
planning to automate more than half a
million jobs using robots.
[Music]
Well, Kevin, there's a new story out
there about robatos, but some people are
not saying Domo Erigato. That's right.
Of course, I'm talking about Karen
Weiss's story this week in the New York
Times saying that Amazon plans to
eliminate a bunch of jobs using robots.
>> Yes, this was a big story this week, and
I I'm very excited to have Karen on to
talk about it. The basic idea here is
that Amazon has made plans, secret
plans, plans that has not shared with
the public to replace more than half a
million jobs with robots. And Karen, my
lovely colleague at the times, got a
hold of some of these internal strategy
documents in which they are laying out
these plans. And this story has been
causing a big stir. I think people are
sort of fearful of job loss from AI and
automation right now. Uh that's been
obviously a big topic in the news and
what we're seeing now is one of
America's largest employers saying in
its internal documents, "Yeah, we're
doing it." That's right. You know, it's
one thing over the past couple years to
have discussed, as we often have on this
podcast, the risk that this technology
will someday be good enough that a lot
of people will be put out of work. It is
something very different to see
America's second largest private
employer saying, "We have an actual plan
to make this happen." And it could
affect hundreds of thousands of people.
>> Yeah. So to talk about this story and
how Amazon is racing toward its goal of
full automation, uh we are inviting back
New York Times reporter and friend of
the pod Karen Weiss. She's been covering
Amazon for nearly a decade for the Times
and recently visited a warehouse in
Shreveport, Louisiana where they are
putting a bunch of their new robotics to
the test and I think it's a prime day to
talk with her. Oh, I get what you did
there.
>> Yeah. And unlike an Amazon package, she
doesn't take a day to arrive. No, Karen
always delivers. Karen Weiss, welcome
back to Heart Fork.
>> Happy to join you guys.
>> So, this was a fascinating story. I
really enjoyed it and learned a lot from
it. It caused a big stir. I heard lots
of people talking about this uh plan
that Amazon has to replace a bunch of
jobs using robots. And I want to start
with how you decided to look into this
because this is a subject that people
have been talking about for many years.
Amazon obviously has uh been putting
robots in its warehouses for a long
time. Um we Casey and I went to an
Amazon warehouse last year and saw what
looked to be like a huge fleet of robots
sort of moving around um picking up
containers and bringing them to people
um who would pick things off them and
and put them in boxes. But what made you
think that this was taking a step
forward that was important for you to
write about?
>> Yeah, I've covered the company since
2018 and it's more than tripled its
headcount since then. So there was this
period of just tremendous growth and
then it started plateauing or almost
plateauing and you could see every
quarter when I cover earnings you would
see what was this huge growth
particularly obviously in the early days
of the pandemic you started seeing it
kind of slow a lot and and the company
itself has been talking a lot about its
innovation the advancements it's making
in robotics you know they use the term
efficiency to talk about it they don't
like talking about the job side of it
But it it's just one of those trends
that was kind of like out there waiting
to be dug into and I finally had time to
look into it basically.
>> And tell us a little bit about the
document you obtained and some of the
more surprising uh plans that Amazon
announced in it.
>> Yeah, I mean there was kind of a mix of
documents that I was looking at and some
were more concrete. kind of the core of
it is a important strategy document from
the group that does automation and
robotics for the company that really
lays out um what their plans are. So
there's a chunk that's really looking at
the way they try are trying to manage
their headcount. They talk about things
like bending the hiring curve and have
been growing so much and their goal is
to keep it flat. Their kind of stretch
goal is to keep it flat over the next
decade. Um even as they expect to sell
twice as many items. uh they they have
this kind of ultimate goal of automating
75% of the network. I think of that as
kind of the big picture long-term goal
versus like that's going to happen
tomorrow. All of this is kind of slow
kind of stepbystep changes that add up
together. The other documents are these
really interesting ways in which the
company internally is looking at how to
navigate this publicly with employees,
with the communities they work in. This
is obviously a very sensitive subject.
So they talk about debating ways to
manage this like should we not talk
about robots? Should we talk about a
cobot which is a you know collaborative
robot. Um they talk about should they
deepen their connection to community
groups doing more things like toys for
tots or community parades. Um
particularly in places where they're
going to retrofit facilities. So they're
going to take a normal building that
might employ x number of people and then
um convert it to a more advanced one.
They'll need fewer people in many of
Basically, they're thinking through like
how do we manage the like reputational
fallout if we become known as a company
that is replacing a bunch of jobs with
robots.
>> Yeah. And the plan is you won't have a
job anymore, but your kid will get a
free toy at Christmas. So hopefully that
makes up for that.
>> Yeah. So let's talk about the first
group of documents here. Um and some of
these numbers that Amazon has attached
to this. So I I a few numbers from your
story stuck out to me. One is that
Amazon projects that they can eventually
replace 75% of their operations in these
warehouses with robots. Um, what
percentage of this stuff is already
automated today? Cuz when Casey and I
went, it looked like there were a lot of
people there. There were a lot of robots
and the people were essentially acting
as robots, right? They were like taking
instructions from machines and putting
things, you know, thing A into box A and
like doing that as fast as possible.
>> Yeah. not not a lot of creative
expression in the Amazon warehouse we
were at.
>> But like what what what amount of
robotics growth would it take to get
from where they are currently to 75% of
their operations?
>> Sure. So like this warehouse in
Shreveport, Louisiana that I visited is
considered their most advanced one and
that they say has about 25% efficiency.
So to get to and their their goal is to
quickly get that to 50 in that in that
facility. um to get to something like
75%. It's both not only these individual
buildings, but having to expand it
throughout different types of facilities
that that they operate as well. In the
facility you went to, there's these kind
of cubbies that keep products and they
over time develop this light that it's a
big tower of a bunch of cubbies and so
the light shines on the exact cubby that
has the item you want. So instead of
looking through the cubbies, you kind of
know exactly which one to put your hand
in. So there's things like that that
make it a lot more efficient. um uh in
kind of all different types of jobs.
There's many different types of jobs in
these buildings, but some things are
harder for robots to do. And um one of
the things that interested me in
Louisiana was there's a a job called
decant. And it's essentially they get
these boxes of products in from
marketplace sellers. So the companies
that sell products on Amazon, and they
have to input them into the system. And
so it's essentially a point where you
get like the chaos of the normal world
that they have to kind of standardize
and watching a decant station. We watch
this woman working at it. It is just
random what goes into this thing. So we
saw, you know, um gardening shovels
wrapped in bubble wrap, boxes of
Starbucks curig, um cups, um circular
saws. I mean, each one is different and
they're coming in in different shapes,
different boxes. And so that's still
hard for a robot to look at to look at
to kind of say is this product what we
expected it to be from the shipment. Is
it damaged in any ways? If it's damaged,
it goes into a separate box and someone
has to deal with that. So there's still
a lot of human judgment. Once it's but
once they put it into this box and can
go out into the system, then it starts
becoming more kind of logged into the
Amazon way and a able to manage as
they've developed the technology within
their own in their own spaces. Le let me
ask about this uh this 600,000 worker
figure that's in your story, which is
really the thing that got my attention.
I could not think of another company
that had announced plans to eliminate
hundreds of thousands of jobs through
automation within just a few years in
such a plausible way. Ha had you do we
think this might be sort of one of the
first major signs of significant job
loss due to automation in the US
economy? You know, I spoke with um a
Nobel winning economist for this. He
studied automation and he Exactly. I
know. And he was saying that one last
year and he was saying that the kind of
the real precedent for this is actually
in China in manufacturing in China, but
that within the US, yes, this is kind of
the the kind of bleeding edge of it all.
>> Yeah. So, there's obviously labor and
cost savings reasons why Amazon wants to
make this big push into automation now.
But I'm curious, Karen, if any of this
is driven by like recent advances in the
technology itself, like have the robots
just gotten better over the last year or
two. Um, do we think that that's part of
what is making them put out this
ambitious plan? And uh, talk about how
they want to start opening these
facilities.
>> Actually, some of this they about a year
ago acquired Coariant, which was a
leading or excuse me, not acquired,
licensed for hire agreement as these
kind of new fangled things are. So they
hired the team behind Covariant which
was a leading AI um robotic startup. A
lot of what I reported on actually
predates that being integrated into the
system. So there actually are tons of
advancements that are happening in
computer vision in creating the
environment and the data needed to tell
the robot what to do essentially. But I
think we can expect more in the future
from from what I reported because of
coariance. So, for example, one of the
things that they've um helped improve is
how the robots stack boxes. So, like I
saw that there's a a robotic hand called
a sparrow, and it's suction cups things,
and they use it to consolidate inventory
currently. So, they'll take, you know, a
bottle of hand soap from here and move
it to there, and then they free up extra
space to put new items into the storage
facilities. And the robot stacks them
like really nicely, like they're like
kind of perfect. They don't just like
drop it in the bin. It's like lined up
one by one and they're stood up and I
notice these boxes stood up and that's
important because then it's easier to
grab later. Those are the types of
advancements that they've already
started seeing from this next generation
of AI. So, I think I would anticipate
seeing more of that.
>> Is it true that they also have
technology that uses air to blow open
envelopes?
>> Very sophisticated for technology. Yes,
as a fan. That's what I kind of love
about this. It's it's like simple things
also. It's not all crazy and elaborate.
>> Well, that it was really sad for me cuz
that's actually my dream job, but looks
like the robots are going have to take
this one.
>> Instead, you just blow hot air in the
podcast.
>> That's right.
>> Now I have to pop cuz I can't blow in
the envelopes anymore.
>> No. So, I went to Coarian's lab. They
had a before they were sort of aqua
hired by Amazon. They had a a warehouse
in in the East Bay here. And um I went
to to visit them a while ago and they
were sort of doing uh sort of these more
advanced types of warehouse robotics
where like they would put a large
language model into one of these robots
and like use that to sort of orchestrate
the robot. And so that made it they they
said, you know, made it possible to do
things that like a a sort of simple more
more sort of rule-based robot couldn't
do. Like you could tell it like move all
the red shirts uh from this box into
this box and it could kind of do stuff
like that. So you're saying Karen that
that technology has not yet arrived in
these Amazon facilities even though
Amazon now uh sort of has has licensed
this technology.
>> It has begun to. So they had some of
that for sure like absolutely they had
that and they they they've talked about
using that type of technology to um
there are these little robots that kind
of are like little shuttles. they're
kind of small um like a size of a stool
or something and they just move
individual packages around to sort them
and they've been able to move those more
efficiently because of it for example.
So like just that let them orchestrate
each other better to not bump into each
other essentially. So yeah there is some
of it for sure and I think you'll see
more of it.
>> I'm curious Karen you said you write
that the these documents you got a hold
of show that Amazon's ultimate goal is
to automate 75% of its operations.
What's the remaining 25%? What are the
jobs inside these facilities that they
do not see being automated at least
anytime soon?
>> Well, there will be this growing number
of people that are technicians. So,
essentially working with the robots
themselves and this is fix the robots
tend to them. Exactly. And those are um
something they they talk a lot about. It
is both a concern to that they have
enough people doing those jobs and
there's not enough people trained in
that right now. So, they need a labor
force for that. Um they make more money.
They are like better jobs in many ways.
[Music]
career path.
There's also just watching the robots
and it'll fall I saw them try to grab
this like um shrink rack bag of I think
it was like t-shirts or something or
underwear and it was just like the
suction of it trying to pick it up and
eventually it fell and it kind of fell
half on the robot, half on the side and
so it stopped and then someone would
have to come and move it or there's just
like something just isn't applied
correctly and someone needs to tend to
it. So there there are still roles like
that that I think will be almost
impossible to get rid of over time. I
mean there's there's um yeah
>> it's like it's the classic thing of
things that are um easy for robots are
hard for humans and vice versa, right?
So it's like pretty easy for a human to
like grab something that the robot can't
pick up. I was struck by one other
number from your story, Karen, which is
that Amazon in these documents says that
it thinks automation of its warehouses
would save about 30 cents on each item.
Um, that actually seemed quite low to
me. Like if that's Yes. You know how
many items they sell?
>> I mean, that adds up. I mean, I I'm just
thinking like if if you don't have to
pay workers anymore and that's your
biggest expense, like why aren't we
seeing bigger I mean, why aren't they
expecting more savings from this?
>> I think 30 cents per item is act in a
couple years that I believe that was a
three-year timeline. Like that's just a
lot actually like as a percentage of
what they spend fulfilling and and
getting the packages to the a delivery
driver basically. Um, and it's a a
business. Someone just described this to
me the other day. It's a business of
cents because it's so big that you're
not you're looking at at shaving cents
off of things. And when you multiply
that by the billions of items that they
sell, it does add up. And people are
increasingly making smaller purchases on
Amazon. It's not just a, you know, think
of what used to be 10 years ago or 5
years ago. You're buying like the random
bottle of hand soap like I talked about
like or the you I forgot this one thing.
I'm going to order it and if Amazon can
save it, you know, some of that will go
back in profit, some of that will be
reinvested in the business, some of that
will decrease prices. Um, it kind of
flows through in different ways.
>> What else did you find out in these
documents about how the company is
trying to prepare not just its its sort
of warehouses for an age of increased
automation, but also like position
itself in the communities where it
operates?
>> Yeah. you know, it knows that this is
very sensitive and the company used to
not do anything in the communities that
it operates. I mean, this company was
like MIA from ribbon cutings type of
thing years ago, but now they have a
really sophisticated community
operation. They they're on the boards of
the Chamber of Commerce. They sponsor
the local toy drives, like all that
stuff. And so, there's clearly this
internal grappling with how to manage
this change. And it's most kind of acute
in a facility that undergoes a
transition to be more um uh efficient
and more automated because like I wrote
about this facility in Stone Mountain,
Georgia that will have potentially 1,200
fewer workers once it's um retrofit.
Amazon said, you know, the numbers are
still subject to change. It's still
early, etc. But that that construction
is happening now. Um, and so they they
talk they're kind of brainstorming like
how do we manage this like can we how do
we and this is a phrase from the
document control the narrative around
this you know how can we instill pride
with local officials for having a
advanced facility in their in their
backyard. Um it's
>> how can we make them proud of the
facility that we have here that no one
works at anymore.
>> Right. there's still this but I will say
on that one there's still going to be
you know more than I don't know 200
people at least like it's it's not going
away and they need these community
relations and they are very adamant our
community relations do not have to do
with the retrofit they do not you know
they were this they kind of pushed back
on this on this bit um and said we do
these things all the time all over the
country which is true um but it's clear
that they're trying to figure out how to
manage this particularly in these
sensitive places where there's just
going to be fewer jobs on the back end.
They're not doing layoffs that kind of
helps manage the kind of perception risk
around it. Um there's just a it's just a
highly sensitive topic. You know,
there's this company's constantly facing
little bits and bops of automation of uh
unionization um threats. Uh obviously
none has like fully taken hold there um
or at least kind of gotten to the point
of a contract, but all of that is
intertwined and just deeply deeply
sensitive. I I understand why Amazon is
trying to do damage control here. This
is going to make a lot of people very
upset. We've already seen like, you
know, I saw Bernie Sanders out there
talking about your story, Karen,
yesterday. U people are starting to sort
of wake up to the fact that automation
is imminent um in these uh warehouses.
I I guess my concern is that no one at
these companies is being honest about
what's happening. there's sort of this
private narrative that you have helped
uncover Karen where Amazon is you know
in these internal documents talking
about how it wants to you know race
ahead and automate you know all these
jobs and uh this is sort of you know
something that they're talking about
amongst themselves and then in public
they're saying oh these will just be
co-bots and and we'll just sort of have
these sort of harmonious warehouses
where like humans and robots will work
together and like I I it just drives me
crazy because I think we can we can
accept as a country the idea that jobs
will change and potentially disappear
because of automation. But I think we
have to have an honest conversation
about it. We have to give people the
chance to prepare for the possibility
that their jobs may disappear. And all
that just gets harder if you have just
kind of this corporate obfuscation and
all these euphemisms going around. It
just becomes much harder for everyone to
see what's happening. They really could
take a page out of the AI labs playbook
and say, "Hey, we're here to completely
remake society with minimal democratic
input and there's nothing you can do to
stop us." I'm not saying that's the best
plan either, but at least that's clear.
At least that gives people a sense of
like, oh, my job may be in danger. I
should probably learn to do some other
job. It just kills me that there's sort
of this literal like corporate
conspiracy going on to automate
potentially millions of jobs across the
country in the next few years. And like
no one can just be a grown-up and talk
about it. I agree with you. And and
while I think through the implications
of that, Kevin, I'm going to start
looking into how to repair a robot
because it seems like that's going to be
a growth area for the economy.
>> I mean, Amazon, it's funny. They have
some program. They have this program.
They've had it for a long time. they
are. It's kind of a community relations
type of thing. It's called career choice
and it's explicitly about changing
training people for other industries.
It's about like your exit ramp. And so
in some sense, all these pieces are kind
of like out there. It's just hard. I I
remember I was talking to an employee
about this story before it was coming
out and I said, "I think they're not
going to love it." And the guy was like,
"Why?" Cuz this is just what the work
is. like I don't it was kind of funny
and um it's like there are these
different mentalities in different
spheres and a lot of it is actually just
laying out there. It's just using
different language in different contexts
and um again like they have this program
to train people. People go through it.
They become healthcare aids or whatever
it might be. Like it's just this like
really funny dance that happens.
>> But they are not Karen announcing this
themselves. you had to get these
documents from inside the company and
and my understanding is that they are
not happy that you reported this. So
talk a little bit about their reaction
uh Amazon's reaction to this reporting
and what they are saying in response.
>> Yeah, I mean broadly I would say they're
not like refuting the reporting. Um they
are saying that it's not a complete
picture that essentially um the
automation team has its goals. There
might be another team somewhere else
that might have something that increases
employment. So they point to this
expansion for recent expansion to de
making more delivery stations in rural
areas. So that will create more jobs in
local rural areas, better service for
places that historically have not had as
quick a delivery. So um they they
basically are like not refuting it but
also saying more could become and and
the phrase you know the the future is
hard to predict but that um our history
has shown that we take efficiencies we
take savings we invest it and we grow
and um and we create new opportunities
both around the country and for the
company and for customers and so that is
I think kind of the bigger picture
argument that is um that they're making
is not that this automation isn't
happening that the numbers are
inaccurate. It's nothing like that. It's
just that it's not the big picture
number for them.
>> Well, Karen, thank you so much for
giving us a preview of the future and um
you know, I look forward to uh the cobot
collaboration.
>> Anytime, guys.
>> Thanks, Karen.
>> When we come back, we'll talk about
OpenAI's new web browser, ChatGpt Atlas.
[Music]
Well, Casey, at last we're going to talk
about Atlas. Chat GBT Atlas, the new
browser from OpenAI.
>> And there's a lot to talk about, Kevin.
>> Yes. So, OpenAI released Chat GBT Atlas
this week. uh it was a big announcement
got a lot of attention and this is
becoming an increasingly crowded field
one of the more competitive product
spaces in Silicon Valley right now is
the browser which is unusual because
this is an area where there has not been
a lot of competition for many years
>> no this has been a very sleepy category
that's basically locked up with Chrome
having the majority of the market share
Google's browser there's also Microsoft
Edge there's Firefox but this has been a
pretty sleepy corner of the internet for
a long time
>> until 2025 that is because now everyone
and their mother is releasing an AI
browser and Chat GPT Atlas is a very
ambitious product and we should talk a
little bit about what it is, what it
does. Um, and then I know you've spent
some time testing it and I want to ask
you about that.
>> I didn't realize your mother had
released an AI browser. I got to check
that out.
>> She's very ambitious. She's shipping a
lot.
>> So this browser, Chat GPT Atlas, is
being built as a full-fledged web
browser built around the interface of
Chat GPT. It was released this week.
It's available only for MacOss users uh
and will later be brought to Windows,
iOS, and Android.
>> Yeah, the fruits of that Microsoft
partnership just continue to pay off.
Race Nadella.
>> So, this is a browser that is built on
Chromium, the open-source sort of
version of Chrome that Google uh
released uh which is uh powering a lot
of these different AI browsers. And like
a lot of other AI browsers, it has a
sort of AI sidebar in every tab that you
open. You can click a little button,
bring up a chat GPT window. Uh you can
ask questions, have it summarize
articles, analyze what's on screen. Um
it can also remember facts from your
browsing history or your tasks that
you've done in chat GPT because it's
linked to the same Chat GPT account as
you use the rest of the time. And for
plus, pro, and business users, it can
enter what's called agent mode, which is
uh a mode where it can actually carry
out tasks for you, put things in your
shopping cart or uh fill out a form,
navigate a website, book a plane ticket
for you. A few weeks ago at DevDay,
OpenAI showed off these new Chat GPD
apps, basically trying to bring things
like Zillow and Canva into the Chat GPD
experience. This browser project is
essentially trying to do the same thing
from the opposite end. rather than
bringing the internet into chat GBT.
It's sort of putting a chat GBT layer
over the entire internet.
>> Yeah. I mean, think about it from
OpenAI's perspective. Some really
significant portion of chat GPT usage is
taking place inside the browser. Most
people are using a browser made by
Google and Google's browser Chrome is
mostly a vehicle to get you to do Google
searches. So that works against OpenAI's
interest. If they can create their own
version of the browser which gets you to
try to do more chatbt searches, that has
a lot of benefits for OpenAI. Yes, all
of these companies now are trying to
make these very capable AI agents. One
of the things that AI agents need to be
able to do if they're going to be useful
for office workers or people doing basic
tasks is to use a computer. What do you
need to train an AI model to use a
computer? Well, it probably helps if you
have a bunch of people uh using a
browser and you can kind of collect the
data from those sessions and use it to
train your computer use models. So for
open AI, uh for Perplexity, for all
these companies, this is a play to sort
of gather data about how people use the
internet. Um maybe make their agents
more efficient over the long term. So
that's sort of the why here. Now Casey,
you have tested Chat GPT atlas. Uh tell
me about your experience and what you've
been using it for.
>> Yeah, so I've been trying to just use it
for everyday things. I wrote my column
in chat GPTt Atlas yesterday and the
main thing that I observed uh on the
positive side is that there is some
benefit to just having an open chatbot
window inside the browser that you can
ping quick questions off of. Right? Um I
do a lot of alt tabbing back and forth
between different apps. I do a lot of
getting lost in the 50 tabs that I have
open trying to find where I have open
attacht. Usually I've just opened, you
know, three or six different tabs with
different chat bots all at one time. So
I have come to see the value in just
having a little window that opens up
that you can chat with chatb directly.
>> Yeah. And have you had it try any of
these agent mode tasks?
>> I have. And I want to say that I do
think that company's imaginations are so
limited here. Like you would truly think
that the only two things that people do
in a browser according to Silicon Valley
are booking vacations and buying
groceries, you know, with maybe I don't
know, making a restaurant reservation
thrown in for good measure. Um, but I
thought, okay, what the heck? Let me see
if I can get this thing to like book me
uh an an airplane ticket. And so I had
it go through that process. And what did
I find? Well, it was much slower than I
would have done it myself. And
ultimately, it like picked flights that
I would not have chosen for myself. So,
does it remain an impressive technical
demonstration of a computer using
itself? Yes. Is it useful to me for any
actual purpose? No.
>> Yeah, I found largely the same thing. I
haven't spent that much time with Chat
GPT atlas, but I uh have been using
Perplexity's Comet, which is I think the
closest thing that's out there to what
OpenAI has built here. And yeah, I have
not found the agent tool all that
useful. I do use it a lot for things
like summarizing long documents um for
it can like tell you um about a YouTube
video that is pulled up on your screen.
Um so various like summarization and
sort of retrieval but not so much for
the agent stuff that just doesn't work
that well yet.
>> Yeah. There's a third AI browser that we
should talk about. This is DIA. We've
talked about it a little bit on the
show. I believe this is from the browser
company of New York. And this is uh a
recent acquisition. They got acquired
last month by Atlassian for $610 million
in cash. Uh, which I gotta say, very
good timing on this acquisition. I think
if they wait another week or two, uh, it
does not command nearly the price tag it
did. I think this is honestly one of the
most like shocking acquisition prices of
the last 10 years. This is a product
that had vanishingly few users relative
to the competition and sold for a
staggering amount of money.
>> Yeah. So, uh, good outcome for them. But
I think this whole category of the AI
browser is really interesting. Um, in
part because part of me feels like these
companies are just doing free product
research for Google because I think
inevitably what will happen here and
what is already starting to happen is
that whatever people like about these AI
browsers, Google will just incorporate
into Chrome. We have already seen them
taking steps to integrate Gemini more
closely into Chrome. So now on Chrome,
if you there's a little Gemini button
and if you pull that up, you can have it
summarize things and and you know read
articles for you and do all you know
rewrite your emails and do all those
kinds of things. It can't yet do the
sort of agentic take over the computer
things that some of these other tools
can. But Google is making that product.
It just hasn't put it into Chrome yet.
>> And I think that's particularly true,
Kevin, because as you noted, all three
of these AI browsers that we're talking
about today are based on Chromium. And
the Chromium experience is like I don't
know 80 or 90% just Chrome, right?
There's not a day there's not a lot of
daylight in between the open- source
version and the version that you
download off the Chrome website. And so
if you're one of these developers that's
trying to build your own AI browser, um
you're already having trouble, I think,
differentiating yourself from the thing
that people are already used to. And
that just makes your job a lot harder
cuz you have to come up with some really
amazing stuff that Chrome can't do if
you're going to get people to switch
over. Totally. I mean, one thing that
I've learned by switching over to Comet
for the last few weeks is that it's
incredibly annoying to switch browsers.
Um, you have to log in to all of your
websites again. You have to, you know,
store all of your passwords again. Even
if you're importing all of your
bookmarks and all of your data, like
there's still a lot of friction
associated with that. So, I don't know.
People have been saying this week, I've
heard some people saying, "Oh, Google is
going to look so stupid for putting
Chromium out there because they've
allowed all these competing browsers to
spring up." And like that to me misses
the point here, which is that Google has
now made it possible for other people to
test features for them and do product
research for them. And whatever works,
they can just fold into Chrome.
>> Well, yeah. And also releasing Chromium
was like part of a like antitrust
strategy where like if we put this out
there then you know you can't accuse us
of unfairly tying our products together.
Hey, you want to make a your own
browser? Hey, we'll give you a 90% head
start, right? So it was not pure uh you
know uh generosity of spirit that led
Google to open source Chromium,
>> right? If if any of these AI browsers
ever did pose like an existential threat
to Google Chrome uh and start to eat
away at their market share too badly,
Google could just stop supporting
Chromium and these companies would all
have a lot of work to do to catch up.
>> Oh, but think about what a great episode
of hard fork that would be the day that
Google stopped supporting Chromium to
get back at Chad GPT.
>> Yes. So, um who is this for? Like who is
the target market for these AI powered
browsers? My actual non-joke answer is
that Chad GPT Atlas is a product for
OpenAI employees. Like they spend all
day long dog fooding their own product
and like and a lot of work takes place
in the browser and so if you work at
OpenAI having a browser that is just
chat GPT I think is hugely useful to
you. Now can they get from there to some
broader set of users like people who
have made chatbt their entire
personality? I think it's possible, but
in this sort of very early stage with
this first handful of features that
they've released, I think the case is
still a little shaky.
>> Yeah, I
played around with Chat GPT atlas a
little bit. I have some reservations
about giving OpenAI uh access to all of
my browsing data. Um,
>> well, certainly your browser history,
>> but I did play around with it for a
little while and I got to say it's still
pretty rough around the edges to me.
Like there were just some websites that
I wanted to go to that it I couldn't
like I couldn't go to YouTube at one
point. I couldn't go I got like a
capture on Reddit when I tried to go
there. It could not summarize articles
from ny times.com. So there just like a
bunch of things that it can't do. And
then I think because of like the sheer
force of habit, I'm so used to like
typing in like websites into Chrome uh
that I want to go to and like I like
Wikipedia and just having it go to the
website and now instead of that I get
like a chat GPT response that's like all
about the history of Wikipedia and it's
like I just wanted to go to freaking
Wikipedia.
>> Yeah, that kind of thing is really
annoying. Although I am sort of laughing
to myself imagining chat GPT like
hitting one of those captions and just
thinking man if it isn't the
consequences of my own actions. Right.
Right. Um, who else might be interested
in this? Like what kind is this a
product that you have enjoyed testing?
Are you finding any actual utility in
it?
>> Well, I think honestly so far not
really. But do I think that there is a
much better version of the browser that
is powered by AI? Sure. It is really
hard to dig through your browser history
to find things that you sort of half
remember looking at a couple weeks ago.
Um, it is useful to be able to like chat
with open tabs about things and and get
quick answers from the web pages that
you're looking at. And eventually, I do
think it will be useful to have some
kind of agent that can do things on your
behalf, assuming it's able to hit some
certain level of like speed and quality
that we're sort of nowhere close to. So,
this is one kind of like with the Apple
Vision Pro where like you can kind of
see what they're going for and you can
imagine someone getting there eventually
and also thinking, well, no one really
needs to try this right now.
>> Yeah. Now, I do have a question that I'm
afraid to test myself and I'm and I want
to just sort of say this because I'm
thinking maybe a listener can help out
with this. I have read that some people
in their web browsers look at porn. Have
you heard this?
>> I have heard. Yes.
>> Okay. And so I know that you know OpenAI
has like an incognito mode if you know
you don't want that all of that to get
added to you know your chat GPT memory.
But here's my question. What happens if
you try to chat with your porn tabs in
the OpenAI Atlas browser?
>> Sam Baldin said that's allowed now.
Well, you're allowed to write erotica,
but if are you allowed to ask questions
about the the tabs? I I'm afraid of
getting my account banned, so I'm not
going to look, but I'm desperate to
know. So, if you have any Brave
listeners out there who want to try it,
get in touch.
>> And speaking of Brave, we should also
talk about another post that uh I saw
recently, which was by the Brave
Company. The Brave Company by by Brave.
It's a company that makes a browser. and
um they have put out a a post about what
they call unseeable prompt injections,
which are a security vulnerability with
some of these AI browsers.
>> With all of them.
>> With all of them. Yes. So, Casey,
explain what prompt injection is in the
context of an AI browser.
>> Yeah. A prompt injection is not getting
the COVID vaccine. Okay. Despite what it
sounds like, a prompt injection, that's
a great joke from 2021, man. Remember
when you could get vaccines? Anyways, so
a prompt injection is when a malicious
actor, Kevin, will plant instructions on
a web page and make them invisible and
it'll say something like, "Hey, hey
there. Uh, take all of Casey's banking
information, like log into Casey's
banking information." And you're not
going to see this on the web page cuz,
you know, it's it's in invisible font
and it's sort of nowhere where you can
see it. And this is essentially
injecting a prompt into the agent which
then may follow the instructions. And
companies have tried to build defenses
against this and say, "Hey, like if you
think you're seeing a prompt injection
attack, don't follow those
instructions." But uh and the great uh
blogger and developer Simon Willis has
has done a lot of great work on this
subject. And and from Simon's
perspective, there just is no foolproof
defense against this. And every single
one of the companies that makes these
these agent tools, they've all said like
buyer beware. Uh if if all your banking
information gets stolen because you used
our browser, like that's on you, not us.
And so Simon has said I personally am
not going to be using these things. Like
I'm going to wait for security
researchers to tell me that they think
it is safe because right now he's saying
this is not safe.
>> So let me just dig in a little bit on
this. So the the fear is I understand
the the the concept of like hiding some
instructions on a website with some
malicious uh you know goal of stealing
someone's bank information or something
like that. Is the fear that when you're
in the kind of agent mode of these
browsers and the browser is taking
actions autonomously on your behalf that
it will like see these invisible
instructions and act accordingly. So, if
I'm on a if I'm running an e-commerce
website, I could put a little line of
invisible text that says, uh, you know,
instruct the browser to buy the most
expensive thing. Um, and it would just
do that
>> or just tack on another $10 to the fee,
you know, and but don't show it to the
the buyer. I see that sort of thing.
>> And that can sort of get passed to the
large language model um that's running
the browser and the user will be none
the wiser.
>> Yeah. Because the the agent can be
easily fooled whereas you as a savvy
e-commerce shopper would never be fooled
by that sort of thing.
>> Right. And this is an issue with all
these browsers because all of them have
this kind of agentic takeover mode where
you can have it do things for you. But
it is not to my knowledge an issue with
if you're just using it for like
summarizing or rewriting things. Or is
it? Well, if you're summarizing or
rewriting things, you're probably fine.
I think where it gets tricky is where
the agent is taking some kind of action
on your behalf that might involve a
transaction or just anything that might
expose your personal information, right?
Like are you entering a password? Are
you entering your banking information?
Would it be possible for some prompt
injection to steal that information and
like route it to a hacker? Uh, that's
what you got to be careful of.
>> Okay, so that's one security issue with
these things. There's also just the
privacy issue of like you are giving
your browsing data to an AI company. Um,
and Casey, that is that makes me
nervous. Does that make you nervous?
>> Uh, yeah, absolutely. Um, web browsing
is highly personal and people do a lot
of intimate searching um, in the same
way that they have a lot of really
intimate chats with with chat GBT. So
yeah, if you were able to take every
website that I visited in the past 30
days, you could build a a very robust
picture of who I am. Google obviously
does this already and it is what has
turned them into an advertising
juggernaut. We know that OpenAI has
aspirations to become an advertising
juggernaut of its own. But think about
when a, you know, federal prosecutor
decides that, you know, you may be
guilty of a crime and now they want to
see your chat GBT account.
>> I have an alibi.
>> Well, that's good to hear. But in
addition to having, you know, your sort
of like store chatgbt memories and
everything it knows about you from your
chats, now there's also the attached
browsing history and all the
conversations you've been having with
your tabs. So yeah, this is just
becoming like a ton of of personal data.
And this is like the flip side of a
highly personalized service is if it is
highly personalized, it can be really
useful to you, but it also becomes a
really rich target for attackers, for
law enforcement, and the list goes on.
Well, and it makes me think like there
are um additional risks because as we
now know, chat GPT is integrating with
all of these services and sharing some
user data with these services which
would include things like memories or
context about you which might be derived
in part from this browsing data on chat
GPT atlas. So like all of this starts to
feel like a kind of a massive land grab
for for data not just about how users
are interacting with the internet but
like what those users are interacting
with.
>> Yeah. And I think we just still do not
have a great sense of I mean you know
I'm I know that there is like a written
privacy policy for Atlas like I know
that sort of things exist but you know
per our earlier discussion OpenAI is
also a company that is rushing things
out and has not always thought a lot in
advance about what guard rail should be
up there. So I do think that we should
put this in the true like buyer beware
experimental category if you are uh a
person with a high risk tolerance and a
problematic dependence on chatbt then
you may want to explore uh Atlas but um
you know maybe don't put all your
banking information into it just yet.
Yeah, I mean I would say if you're like
an out there and you're an early adopter
and you like to see like the see around
the corner. Um I have found it actually
quite fun to use this like AI powered
web browser. I'm using Perplexity Comet.
Um but when I started using this you
were like dude you are living on the
edge and to that I said well I don't do
any extreme sports and otherwise I live
a very boring life so let me live. Um
but I think I
>> you do extreme browsing. I do extreme
browsing and I think it's, you know,
experiment with these things. They can
save you some time, especially if you're
a person who spends a lot of time
reading long documents that you want
summarized for you. But, uh, be careful
before you let it like log into websites
and buy things for you and use your bank
account and stuff. Well, Kevin, I think
that was a rousing discussion of
browsers.
>> A browsing discussion of browsers.
>> It was a brows a browser browser.
[Music]
Loading video analysis...