Why A.I. Is Making You Exhausted
By Hard Fork
Summary
Topics Covered
- AI Shrinks Intelligence Haystacks
- Claude Drives Real-Time War Targeting
- Data Centers Become Prime War Targets
- AI Brain Fry Hits Heavy Users
Full Transcript
All right, Kevin, let's get into the biggest news of the week, which is the war in Iran. Specifically, we want to talk about what we know about how AI is being used in this fight.
>> Yeah. And I think the reason to talk about this, uh, is not just because it's happening. It's the biggest story in the world, but also because I think this is really a turning point in the use of AI in the military. We've been hearing for years and reading science fiction books and listening to people talk about the use of AI in military applications, but now I think we are starting to see exactly how these tools are being used
on the battlefield and what kind of effects they might be having. We are. And I'll say up top that anytime you're talking about the use of technology in war, there is always the risk that you
are just passing along propaganda, right? Because both the military and the contractors have a vested interest in telling you, hey, we have some real gee-whiz new stuff and it's totally changing the game, right? Everybody has an
incentive to tell you that. And yet, as you and I have dug into it, we do believe that there are some notable ways that AI is being used, and I think it is worth mentioning them. If for no other
reason than I think it's been the experience in the United States over the past couple of decades that tools that are deployed abroad during times of war sometimes come back home uh after the war and wind up being used against
American citizens.
>> Yeah. So I think we should tease apart a few things here. One of which is like let's talk about how the actual AI tools are being used by the military, what the tools are, what the kind of ramifications of using them this way
are. Uh, we should talk about how Claude in particular, uh, seems to be a key part of the war in Iran so far and, at least from what we know, seems to be behind a lot of the strategic decisions and
operations that the military is making. And finally, about how this conflict is or isn't going to reshape the future of AI, by doing things like taking aim at data centers, by interrupting the supply chains of things like semiconductor materials, all the larger questions about how this conflict is playing out. And
before we get into it, let's briefly do our disclosures. My fiancé works at Anthropic.
>> And I work for the New York Times, which is suing OpenAI, Perplexity, and Microsoft over alleged copyright violations. Okay, Kevin. So, where should we begin? Well, let's talk about how AI is actually being used in the war in Iran, um, and what we know about the actual deployment of this stuff. Casey,
what do we know? Yeah. So, I read a great overview this week in the Wall Street Journal by Daniel Michaels and Dov Lieber, who go into good detail about what we know about how the United States and the Israeli militaries are using AI. They're upfront about the fact that the military is trying to keep a lot of this secret. Uh, they are apparently not going into a lot of detail, but there are some things that we know.
One is that Israeli intelligence for years had been monitoring uh traffic cameras in Tehran that they had hacked into um and also eavesdropped on senior
officials' communications. And this is a big theme, Kevin, that runs through all of the coverage of AI in the war in Iran, which is that, uh, the military is saying that it is very effective, as you
would probably imagine, at processing large quantities of information.
>> Yeah. So basically, you've got all this data coming at you if you're, you know, running a military in the year 2026. You've got data from drones and sensors and maybe security cameras that you've found a way into. Uh, and you can kind of use AI to process all of that to put it onto some kind of, like, a real-time dashboard, so that you can just open a screen and kind of see where all your supplies and all your troops and all the enemy combatants are, and use it to sort of make sense of this
wave of information that is coming at you every day.
>> Yeah. You know, recently on the show, as we've been talking about the conflict between Anthropic and the Pentagon, we've been talking about the uh potential eventually to have autonomous uh weapons out in the battlefield,
potentially killing people without human intervention. And the big message that I'm reading in the coverage so far is we are not there yet, right? That the AI tools that are being used, uh, we're seeing them in fields like intelligence,
mission planning, logistics actually pretty far away from the battlefield doing things like helping to find a target to send a missile at and then after an attack trying to do some kind
of quick analysis to see, hey, what exactly did we hit and maybe what should our next target be? It's also really clear that what's happening in the military is what I would call like
shrinking the haystacks, where there are sort of these massive troves of data, where it's like, we have, you know, hundreds of thousands of, uh, phone calls or audio recordings or emails or intercepted traffic to, uh, Iranian websites, and we can use AI to kind of narrow down the bits of that
that might be useful to us. Uh because
in all intelligence-gathering situations since the dawn of eternity, like 99-plus percent of what you're collecting is totally useless, and there have been, you know, entire divisions of humans who have been
employed to like dig through all that stuff and find the stuff that's actually useful and now AI can do that pretty well.
>> Yeah. And military leaders are saying that there are many, many missions that just never happened because they didn't have the manpower to do exactly what you just said, and now they do. And I would point out, Kevin, that again, you know,
when in our whole discussion of Anthropic versus the Pentagon, we were talking about, you know, the risk of this technology being deployed against Americans and how effective that could be in, you know, all sorts of surveillance operations. So, I think it's important to highlight that that exact thing we were talking about, sort of like a bad scenario in the United States if the government was doing it to its own people, is just absolutely happening right now in Iran.
>> Yeah. And we probably won't know the extent to which it's happening, because most of it is classified, and, uh, you know, nobody in the military wants to give away their secrets to any potential adversaries. But my best guess, from the people that I've talked to who have been working on this stuff, is that this is happening pretty rapidly, that we are seeing many, many divisions of the military that are essentially
using this stuff every day.
>> Yes. Now, one question that is coming up a lot is, to what extent, if any, is the military starting to offload decisions to AI? Right? Is it the case that there is some military commander that is typing into a chatbot, hey, should I send the missile here or there? The military's public statements are that they are not doing this, right? They are sort of taking care to say, no, humans are in the loop here, we are relying on human judgment. But there are other experts that are saying, you know, at some point, if you're going to be consulting with a chatbot and the chatbot is getting smarter and smarter, before too long, it's probably not going to feel very different from the AI actually just making the decision for where to shoot a missile.
>> Yeah, I think that's a really good point. I think there is a difference between a fully autonomous weapon that can sort of do everything, from selecting the target to, like, firing the weapon, uh, all on its own with no humans in the loop. But I think what you're talking about is sort of a system that can do everything except fire the weapon. It can sort of select the target. It can tell you the right timing. It can, like, identify all the objects in the surveillance footage. Um, and it can kind of give the military officials the confidence they need to go ahead and push the button. And there's some worry that this is starting to happen with the
help or the encouragement of AI. Uh, there was a missile strike, uh, in Iran that hit an elementary school the other day and, according to Iranian officials, killed over 175 people, mostly children.
Horrible thing. And people have been wondering if that was related to Claude or some other AI system telling the military maybe erroneously that this was
a legitimate target. Now, we should say that particular incident is still under investigation, and initial reports from the military have said that it was unlikely that AI was responsible in that case. But I think this is the kind of thing you're going to start seeing more and more of: when there is, uh, an attack that, you know, kills civilians or doesn't hit its intended target,
people are going to be asking, "Oh, was that a human who made that mistake or was that an AI system?" Yeah. And I have to imagine, Kevin, that there is just going to be more and more pressure within the military to more fully defer these decisions to AI systems, right?
Because at some point there will at least be some contingent in the military saying these systems are more trustworthy, they can make decisions faster, and let's do it. So I think that's just something that we need to be very much, uh, on guard for.
>> Yeah. So that is what we know about how AI systems uh have been deployed so far.
But Kevin, as you mentioned, there's also been a lot of discussion about uh well what some particular models may or may not be doing during the war.
>> Yeah. And I think Claude and Anthropic have come up a lot in recent weeks for obvious reasons. They had this big fight with the Pentagon. But it's also the case that right now in this war in Iran, Claude is the only AI model that has actually been deployed inside classified
military systems. So to the extent that AI is having an effect in Iran, it is probably Claude. Yes. And the Washington Post had a story, um, about AI and the war in which they said that Claude was so essential to operations that if for some reason Anthropic said, "Hey, we want you to stop using Claude," the military would push back and say, we're actually going to force you to continue to use this product. So, just again, the continued strangeness of the situation.
The Pentagon has now formally declared Anthropic to be a supply chain risk.
This week Anthropic sued over that.
Yeah. And there's also been a lot of reporting coming out over the past week or two about the actual ways that Claude is being used and deployed in the military. There's been some reporting on this system built by Palantir called Maven Smart System, which from what I can tell is kind of a real-time dashboard for intelligence that
basically allows you to pull in a bunch of drone footage and sensor data and track a bunch of, you know, supplies and troop movements and things like that.
And by the way, this is the system that caused a huge controversy at Google in the late 2010s. And, you know, Google employees, like, quit over this. They did not want the company involved with Project Maven. Uh, and eventually Google dropped the contract. When they did, Palantir stepped in and eventually brought on Claude. Right. And so Claude has been integrated into Maven Smart System since 2024.
And the reporting that I've seen over the past week, including in this article in the Washington Post, um, said that this combination of the Maven Smart System built by Palantir and Claude has already suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance. Um, and according to this same article, the use of Maven and Claude has turned weeks-long battle planning into real-time operations. So, this is not just, like, a kind of tool that people in the military are using for handling, like, routine office work. This is actually sort of a core part of their strategic decision-making process. Now, Kevin, do you know if this is, like, um, a specialized model of Claude? Again, I'm
thinking back to our conversation with Amanda Askell, where she talked about all these efforts to make sure that, you know, Claude is really good.
I'm sort of imagining that version of Claude being told, like, "Hey, analyze all this footage and decide where to send a missile to kill a bunch of people." It's hard for me to imagine that version of Claude being like, "Yeah, yes, sir," right away. Right. So, do we understand at all how that is working? So my understanding is that it is largely the same model that consumers and enterprises would use, uh, but that there may be some additional fine-tuning to make it work inside these classified systems on these sort of military applications. Um, that it may sort of refuse different prompts, or fewer prompts, than a model aimed at consumers. Um, and that there may be some additional kind of changes around the edges, but that it's basically the same Claude that you and I have. Hm. I see.
Well, so this appears to be a very temporary phenomenon. We know that OpenAI has signed a deal with the Pentagon, and presumably its systems will be onboarded onto, uh, classified defense systems soon. Gemini was approved for non-classified uses at the Pentagon. So I think pretty soon, uh, the Pentagon is going to have, uh, more options to choose from as it deploys these systems. So that is how AI is being used, uh, offensively by
the United States and uh Israel, Kevin.
Uh but we should also talk about what Iran is doing offensively against some of these AI systems. >> Yeah, this is a part that I have not spent as much time looking into. So tell
me what you're seeing.
>> Well, so as you know, there's been this huge buildout of AI infrastructure throughout the Middle East over the past several years. We've seen these multi-billion-dollar projects being signed and built in Saudi Arabia and the United Arab Emirates and Qatar. Uh, and these deals involve basically all of the big American, uh, tech giants: Amazon, Microsoft, and Google. And I would say there are sort of, like, two major pieces of infrastructure that are relevant here.
One is data centers, right, which uh are, you know, being used to uh run AI systems and also just provide basic cloud hosting and storage services to
all sorts of companies. And then you have fiber-optic cables, which connect those data centers to the rest of the world. Um, so let's maybe talk about the data centers first. Sure. So, the
Guardian reported that on uh the morning of March 1st, which was the day after the initial US attacks in Iran, Iran responded by uh striking a couple of
Amazon data centers in the UAE. And they
also damaged a third one in Bahrain. And
in the immediate aftermath of that, people, uh, in those countries were opening up their phones and they couldn't check their bank balances, they couldn't order a taxi. It seems like a
lot of services in those countries were being hosted on AWS and they just didn't have access to those services anymore.
Um, afterwards, Iran put out a statement that said that they had gone after the data centers to identify the role that they played in supporting the enemy's military and intelligence activities.
That's so interesting. So they were basically targeting data centers rather than say troops because they thought it could actually be more disruptive if it turned out that the US or Israel or any
of the other allied nations were running their services on data centers located in the Middle East.
>> Yeah. Well, I mean and also like data centers are a great target. Like they're
just sitting there. They don't have any defenses, right? So you can just send a few missiles over there and do an asymmetric amount of damage. And so now, Kevin, people are starting to question the logic of doing all these
multi-billion-dollar deals in the Middle East. They're saying, "Hey, should this really be a linchpin of global AI infrastructure if it's just kind of a rough neighborhood and all of the investments that you're going to build there are just going to be kind of
perpetually at risk?"
>> Yeah, I think that's a really interesting sort of tactical shift that just speaks to how central all of this AI stuff has become in military conflict. And then you have all these other risks of disruptions to the supply chain. And right now there are lots of ships stuck that can't get through the Strait of Hormuz because it's been blocked off. And we now have people and companies saying that some of the raw materials that you need to make things like semiconductors might be delayed for weeks or months or however long this conflict lasts. Um, and that prices might go up, and it might get harder for companies to build new data centers here in the US. So all these ripple effects we're starting to see are, like, downstream from the fact that we're at war with Iran.
>> So that's what's going on with the data center infrastructure. Kevin, you're also probably wondering what is going on with these undersea cables. So there are some very important fiber-optic cables that run through the Strait of Hormuz that are responsible for transporting internet traffic, uh, from that region to
the rest of the world. Uh, as of press time, uh, as we record this, these lines have not been attacked or disrupted, but everyone is keeping a really close eye
on it because were they to be disrupted, there is just simply no obvious way to fix them in the middle of a live war.
Casey, how does this all make you feel, that AI is playing such an important and central role in an ongoing war in Iran?
>> I mean, this to me just feels like the frog is being boiled, right? Like when I think of all of the potential violent
uses of AI, data analysis is not among those that gets me most nervous.
Although of course I do have concerns about, you know, domestic surveillance.
I also know how rapidly these systems are advancing. I know the pressures that are quite apparent in our military to use AI for ever more things. I worry
that there aren't going to be appropriate safeguards on those things.
And so, yeah, I just have a high degree of concern about where all of this is going. I'm open to the idea that AI systems could be used to wage war more safely and to maybe even prevent casualties, but I am not sure that we have built systems that will actually do that. Yeah. And I would just say, like, I keep thinking about how all of the companies that are building frontier AI systems today at one point in their existence had decided that they didn't want their stuff being used by the military. You know, back in 2014, when DeepMind was a sort of little-known AI startup in London, they sold themselves to Google. And one of the major sticking points in those negotiations, one of the reasons they sold to Google and not what became Meta and was at the time Facebook,
was that Google had allowed them to have this prohibition on using their technology for military applications or surveillance.
As recently as a couple of years ago, Google's AI principles said that we are not going to allow our technology to be used for the military. And in 2025, it quietly took that language out. OpenAI, same thing. They had language in their terms prohibiting their models from being used for military applications. They took that language out quietly in 2024. Meta, same thing.
Anthropic, interestingly, is the one sort of frontier AI lab that never had an explicit prohibition on military applications, but they did have a bunch of language in their original terms that
they have amended to make it more possible for the military to use this stuff. And so, like, I understand strategically why you would make the decision to sell your AI tools to the US military, but I just don't want us to forget that, like, all of these companies were run by people who at one point thought this was all a bad idea, to be selling these very advanced AI tools to the military. And then they changed their minds, and they did that because of some combination of pressure or just maybe market opportunity to get these big military contracts. But they did at one point have a principle that involved, we don't want our stuff being used to kill people. And I would like them to at least reflect on, uh, the fact that that has changed. Yes. And for everyone else, the next time one of these companies tells you about some unshakable principle that is the foundation that the entire company is built on, it should make you wonder whether that can
hold up to pressure as well.
>> Yeah.
So Kevin, I feel like there is this new genre of blogs and social media posts all devoted to the idea that using AI is making people feel completely exhausted.
>> Yes. And insane.
>> There's a spectrum, and it starts at exhausted and it goes all the way to insane. Uh, Sidon Carr, uh, who's an engineer who builds tools for AI agents, wrote a blog post that I saw all over social media recently called "AI fatigue is real and nobody talks about it." And he said that, on one hand, he felt like he'd had the most productive quarter of his entire life as he uses all these new agentic coding tools, but on the other hand, he said he had felt more drained than ever before in his career. Yeah, I
think people are starting to sort of use these tools more and come to grips with not only the effect it's having on their productivity, but also like on their brains and on their ability to kind of
make sense of how quickly things are shifting. I really liked this essay that a venture capitalist wrote a few weeks ago about what he called token anxiety, which was this feeling that, like, if you don't have a bunch of, you know, Claude Code agents running parallel tasks for you, uh, while you sleep, you're feeling like you're missing out. And people at dinner parties in San Francisco are now bragging about how many agents they have running at all times. So there's, like, something psychological happening to the people who are using this stuff a lot at
work. Absolutely. And recently we have begun to see some actual empirical research on the subject. So last month, researchers at UC Berkeley published some findings in the Harvard Business Review, uh, from an eight-month study observing workers at one 200-person tech company. And they found that AI was just making work a lot more intense. Workers
were having to multitask a lot more.
They felt like if they were not using a lot of AI tools, they were not keeping up with expectations, and that they used to have little breaks during the day, uh, where, you know, you go to the water cooler and talk about, you know, uh, what's going to happen on Survivor this week. Well, that doesn't exist anymore, at least not at this company. And then
last week, a group of researchers at BCG shared some similar findings in the Harvard Business Review. And this one really caught our eye because they found that under certain conditions, workers
are experiencing what the researchers are calling AI brain fry.
>> And to be clear, that is different than AI brain rot, which is what you get on TikTok when you start looking at videos of Ballerina Cappuccino.
>> That's right. You know, and actually they thought that Emmanuel Macron might have this, but that turned out to be AI French fry. So anyways, here's what AI brain fry is, Kevin. They're defining it as mental fatigue from excessive use or oversight of AI tools beyond one's
cognitive capacity, which I think is kind of a funny idea. It's almost like you got a new coworker and they're really, really smart and it's sucking
your life force out of your body. So, we
want to know more about this study because I think it gives shape to a conversation that we're seeing rippling out across the economy as more and more managers are telling their workers to
start using AI tools. It is clear that not all is well out there. People are starting to feel kind of bad, and they're maybe going to be less productive and more likely to leave their jobs as a result.
So to learn more about the findings in this study, we've invited the lead author, Julie Bernard. Julie is a managing director and partner at Boston Consulting Group, as well as a fellow at the Henderson Institute, which is an internal research group and think tank at BCG. So let's bring her in. Let's do it. Let's get fried. Julie Bernard, welcome to Hard Fork.
>> Thank you. So, thanks for having me.
>> So, let's talk about the study. You
surveyed, uh, 1,488 workers in January of this year from all different disciplines, lots of different companies. Um, what kind of questions did you ask these workers?
>> Yeah, we asked them all kinds of questions around how they use AI, how they feel at work, you know, traditional burnout metrics. We asked some, you know, sort of proxies for cognitive ability. And we did throw in a question about AI brain fry. We said specifically, like, what do you think about this thing that could be AI brain fry? Like, are you feeling that?
>> And tell us how you define AI brain fry and what the survey results told you about it.
>> We defined it as really, like, a type of cognitive strain. So we said it was mental fatigue. It was related to excessive use of, interaction with, or oversight of AI, and it was about being beyond one's cognitive ability. So it's sort of like, I'm using the tool, but it feels beyond my ability to process it. So 14% of people, um, who use AI said that they felt this, and I was especially surprised by the extent to which they told us about it. We asked, you know, open-ended, like, just tell us, what is this thing? Where does it show up? How does it feel to you? Um, and people wrote a lot, right? Like they wrote all these things about, it feels like I have 12 browser tabs open in my head, or it feels like I'm working so hard to manage the tools, I'm actually not really doing the work. Like, I'm not actually managing what I'm supposed to be doing.
>> I thought this was so interesting because on paper, if you told me, "Hey, we're going to give you a brilliant new assistant. They can answer all of your questions. They can do many of the tasks that you, uh, prompt it to do," that would sound very exciting. You know, sometimes I think, what would it be like to have, like, a really great podcast co-host? You
know, somebody kind of came in really prepared, asked a lot of great questions, had a great energy.
>> You'll never know. And I'll never know.
Okay. But some of these people at work are now having that experience. But what
you're saying is that that is not an energizing thing for them. It's draining
them in some way. So, like, what do you think is the mechanism by which people are coming to feel so exhausted by working with these systems?
>> Yeah. Well, I do think it's particular to these two things that we found, um, which is the oversight of the tools and the intensification of work due to AI. And what people reported specifically is they put in more mental effort, they felt more fatigue, and they felt information overload. And you know, we need more research, right? Like, this is new and we're learning. But my hypothesis, right, from working with a lot of different companies on this kind of thing, is it is fun and exciting, combined with, we feel more pressure. Everybody's talking about AI, AI, productivity, right? And I think it's just human nature to go, okay, one more thing. Let me just sort of try this out, see what I can do. And we're not recentering on, like, what was I actually trying to achieve today,
>> right? We're not getting focused on some of the most important aspects of our work.
>> Yeah. I'm curious how much you think this really boils down to fear. Um, because when I talk to people who are anxious about using AI at work, um, they sort of circle around this issue that, like, maybe it's materializing as burnout or feelings of overwhelm, but, like, at its core, what they're nervous about is that we now have these systems that can do parts of their job, and they're worried about losing their jobs. And so I'm wondering: did anything in your study sort of get to any of the economic or sort of survival anxiety that these workers might have been feeling, that might have been registering to them as burnout but deeper down was something else?
>> Yeah. So this is probably a good time to separate the two, because the brain fry is the cognitive piece. Burnout is, you know, physical and, um, mental exhaustion. It's more emotional. It's more about how I feel about work and, you know, do I feel like I'm doing a good job at work? With burnout, we did not find a correlation with brain fry. So, I just want to be really, like, clear. It was very interesting. I thought we would; we did not. Brain fry is distinct. And then what we found is actually you could use AI to reduce burnout. So, if you use AI for more repetitive tasks, you actually were reducing burnout. So, you're actually making workers feel better at work. And that is something that's really positive that we need to lean into. Um, so there's a lot of nuance.
Maybe the last thing I would say is we did look at how positive or negative you feel, but typically, in my experience, the people who are afraid are not the people who are doing heavy oversight work.
>> Right. So they're sort of the people who are leveraging it more like a search tool? They're not necessarily getting up that learning curve to the more intensive interactions.
>> Mhm.
>> In your study, you found that people in certain industries tended to experience AI brain fry more frequently. I was struck that marketing seems to be the place where people are feeling it the most, and people in areas like management and law and compliance reported significantly less brain fry. Do you have a theory on why that is?
>> Yeah, so the short answer is, unfortunately, our survey, at least scientifically, was not designed to answer that question. But I have my theories based on other work that I've done. Three years ago, I worked with some of the models to try to predict skill disruption. I was trying to figure out which jobs will change the most. And one of the jobs that changed the most from a skill perspective was marketing manager. A marketing manager was 90% disrupted from a skill perspective. So that's the first fundamental piece about marketing: they've tended to adopt, and it's a really different way of working because of the power of the tools. The next thing, if I really think about what brain fry is: it's about the iteration, it's about the oversight. A lot of marketing lends itself to that. In the field, we see stories of folks who are doing image creation, they're doing synthetic consumer panels, they're spinning up a bunch of campaigns at the same time. And it really lends itself to that definition of: when do they know they're done? When do they know the image is ready? Have they defined those success thresholds for themselves? I'm guessing they haven't yet, right? They haven't figured out how to do all those things to the right level of quality based on the outcome they're trying to drive.
>> It makes sense to me that the more your job is changing, the more vertigo you're going to experience as these new tools are introduced into your workplace. You know, Kevin, you just observed that managers seem to be experiencing this less. One of my theories was that the reason is they're already used to overseeing a bunch of digital abstractions, since with their human employees, they're mostly just sending them Slack messages and sending them emails, and hopefully meeting in person fairly regularly. But if you're a manager, you've already been used to overseeing a bunch of stuff, and those people may have skills that people who have not yet been in management roles don't have. I think there's something to that. And I also wonder, Julie, if you think there's anything inherently isolating about these tools. One thing that I found with using AI for my own work is that it's like a single-player video game, right? You're going back and forth with a machine. Very rarely am I in a room with other people using AI with them. And I wonder if part of the brain fry is this siloing effect that these tools tend to have in the workplace, where everyone is chatting with their chatbots and their agents and no one is talking to each other.
>> I'm glad you brought that up, Kevin, because back to this point that there are ways to use AI that actually reduce burnout: the people who were using it for repetitive tasks, we found that they felt more socially connected at work. And it's interesting: in all the companies that I go to, I do various types of AI enablement and workshops. And one of the questions I always get a lot of engagement on is: what could you use AI for among the three worst things on your to-do list, the procrastination things, the things you really wait to do? People love to talk about using AI for those. And my hypothesis is sometimes that's probably the repetitive work. And when you use it for that type of repetitive work, you actually reinvest the time in things that give you energy. So more work needs to be done, but I think I've seen that a bit in the field, and that's what our data would suggest as well.
>> I want to ask about the three-tool cliff, which was a funny part of your study. Basically, you found that the number of AI tools people are using at work has some bearing on their productivity, or their feelings of productivity, and that when you switch from using three to four AI tools at work, something happens where all of a sudden you start experiencing these things not as a productivity enhancer but actually as more of a stressful thing. Do you have a theory on why that is, or why there seems to be this threshold? Classically, multitasking is not very productive, right? We're all seduced by the idea that we can do more and more and more.
>> Casey's playing right now.
>> Exactly.
>> I am not.
>> So yeah. I think multitasking is part of that. But it's back to this point of: I'm overseeing more things. I'm actually doing more things. I'm starting more things. I'm stopping more things. I have more output to govern. My advice for leaders and managers is to help people understand this. One of the things I'd love to see: AI fluency right now has mostly been defined by technical skills. Maybe in the last six to nine months we've started to talk about the human skills that persist. I actually think cognitive health should be part of defining AI fluency as we go forward. So individuals can start to work differently with the tools, but managers and leaders can also help protect against that.
>> Let me raise one objection that some people might have to the research. You work for a consultancy, and consultants have an interest in making AI seem difficult so that companies will hire them to help manage it. Is there any chance that we're over-pathologizing what is going on here, or giving a scary-sounding name to what might just be a temporary adjustment process as people start to use AI tools in the workplace?
>> Yeah, I'm glad you asked that. Maybe what I would say first is how I look at this and why I'm doing this research. I am a consultant. Yes, I do advise companies. It's the bread and butter of what I do. However, I'm also a researcher, and I care really deeply about the data. And what's been very hard is our clients have wanted answers, answers that we don't necessarily have all of the playbook for, because it's so new and is changing so rapidly. So I'd say we really designed this to be a data-driven intervention. But beyond that, like I said, I've been at the rock face for the last three years. I've talked to more than 100 companies. I've actually trained teams myself. I've been in the room with software developers, marketers, et cetera, trying to use these tools. And I see that there's something there: there's a real strain where I'm trying to do the right thing, but something's getting in the way of me being productive with the tools, and we need to redesign work, particularly within teams, to do that better.
>> And if you're a worker out there, if people are listening to this and saying, "Yes, I am a worker. I am using AI tools at work. I am feeling the brain fry that you are describing," what can they do to help themselves? What has shown itself to be effective in your experience?
>> Yeah. So if you're an individual worker, I think first just acknowledging that this is a risk is the first thing. The second thing is really focusing on what you're trying to achieve. It's back to that outcome piece. I know this is really basic, but be very clear that we're measuring outcomes, not output, that we're trying to get to the right answer, and what are the steps to help me get there? And from our data, the things you could do are, one, engage your manager. For managers who engaged in questions, we saw brain fry go down. I think it's about creating that open dialogue about how should I use AI? When is it valuable? The other thing is to engage your team on this. Interestingly, when teams were using AI together and had better integrated it into their workflow (like how I hand off work to Kevin, and Kevin does to Casey), we also saw brain fry go down. And I don't have the data to say exactly why, but my hypothesis would be that we're not bottlenecking work in one person, and we're creating a much more effective system where we're getting the work done with the right outcomes together.
>> It seems tricky to me, though, because I think there is just so much thrashing around in organizations right now. The amount of knowledge that any given manager or worker has about AI right now is highly variable, and whether their knowledge is keeping pace with the capabilities of the latest models seems like an open question to me. So I have to say, in the near term, I actually feel quite pessimistic about this. I'm sure there are going to be individual managers and teams that are doing a great job, but at an economy-wide level, I think people are just absolutely all over the map on this.
>> Yeah, I think so, too. And it's also not clear to me that people are going to feel comfortable talking to their managers about how they're feeling about AI, because I think a lot of people have these reasonably well-founded fears that if you tell your manager, I'm using AI to do this part of my job, the manager's first thought is going to be, well, maybe I can lay you off, right? Maybe I don't need all these humans anymore. And I think we're seeing enough of that happening at big companies now, where they're laying off big percentages of their workforce and attributing that to productivity gains from AI, that I think people are feeling like, well, if I discover how to use AI for my work, I'm going to keep it to my damn self.
>> Absolutely.
>> Or, Kevin, I think we also see the reverse of that, which is you go on social media and you see people bragging about the insane lengths they are going to to be using AI at all times, to have their Claude swarms up and running and coding while they sleep. And I feel this sort of deep insecurity embedded in that, which is: if I'm not out there constantly telling you how much AI I'm using, I might be next on the chopping block.
>> My reaction to that is: this is why leaders play a really important role, because, Kevin, your point is well taken. There are things individuals can do. There are absolutely things managers can do. But this is about systemic redesign of work. So, Casey, to your point, I don't think AI brain fry is going away unless we tackle it head-on. I don't think this is something that we can just democratize and let everybody figure out, although there are things they can do to mitigate it. But I'm really interested in, okay, let's rethink how we get the job done. We are really bad at stopping work. Is all work valuable? If we had leaders engage more meaningfully in these questions, that's the work we need to do if we really want to address some of this.
>> Julie, I'm wondering how much you went back and looked through historical precedent here. When I was researching my last book, I was doing a lot of reading about the 1970s, when a bunch of manufacturing workplaces like auto plants were getting all these new automated robots to help them do things like assemble cars. And there was this whole nationwide panic about this. They called it Lordstown syndrome, because the first GM plant to have this level of automation was in Lordstown, Ohio. Congress held hearings about this new wave of worker alienation that was happening in these blue-collar manufacturing workplaces, for a lot of the same reasons that, to me, rhyme with at least this AI brain fry idea. Workers were basically saying, I don't feel like a human anymore. I feel like I just push buttons and the robots do all the work. I don't talk to people at the office anymore. My managers have all these crazy productivity expectations of me. And I think what was interesting, beyond just the parallels to what people are feeling in white-collar workplaces today, was that the way they got out of that was through striking and organizing and unionizing and getting a bigger share of the profits that these companies were making from all this productivity. So I'm just wondering if you could riff on some of the historical parallels and where this may all be heading.
>> Well, I always get the question around Excel and accountants, right? Did the rise of Excel lead to more or fewer accountants? Or if you think back to the Industrial Revolution, one thing I think is a really interesting parallel there is the rise of technology at that time. In many cases, it wasn't until there was actually a rearchitecture of the shop floor that we saw the productivity gains. And to me, that's an interesting parallel to what we need to do with redesigning work.
>> Julie, one of the questions I wanted to ask you: it is the role of the consultant to come in and say, I have talked to people all across this land, and I understand the best practices, and I will bring them to you, and you can redesign your shop floor so that you can get back to being maximally productive. But Kevin and I feel like the ground never stops shifting under our feet anymore, and that every few weeks some new model comes along where the level of capability goes up, and maybe even something that I would not have been able to do in November I actually can now, and before too long maybe that's going to be a core expectation that is part of my job. So part of me wonders: is this actually even a good time to be redesigning your workflows, if three months from now, six months from now, the landscape might have completely changed all over again?
>> Yes. And I have tackled this question many, many times. Here's my take. Companies who didn't do anything two years ago would have said the exact same thing to me, Casey. They would have said, the tech is going to change. I'm going to wait. I want to be a fast follower. And honestly, there is some smart truth to that, right? Pick your bets. I definitely wouldn't be doing this everywhere. But I think this is about learning a new capability and muscle as an organization. This is about teaching us how to change. So I would say, if you're on the sidelines, yes, it's just going to keep moving. You could have had that excuse a year ago, two years ago, and you could have it for two more years. But you're also going to be missing out on that opportunity to build capability as leaders, to build that in your teams, to start upskilling people. I think there are actual things you can do to support your talent to go on this journey with you.
>> Yeah. And if I could add something to that from 1972, which is apparently where I love going on this subject: there was this team at GM, when the Lordstown syndrome was taking over, that had to figure out how to bring back the striking workers. One thing they did was set up these new humanization councils, where workers, people from the assembly line, were invited to give their thoughts on how the robots were being used and how the machines were set up and how the assembly lines were laid out. And feeling like they had some input and some control over their situation, and were not just passive bystanders, actually seemed to help. So I don't know whether that's directly applicable to white-collar workplaces going through this today, but I do think that having some of the energy and ideas come from the quote-unquote bottom, from the actual workers doing the individual contributions, seems to matter.
>> Yeah, I mean, Kevin, that's absolutely right. How do we have more agency in this? And if you do that, you're going to be really user-centric. You're going to think about: what work do people enjoy doing? What work do they not enjoy doing? What are some of the barriers, cognitive or otherwise, to actually getting that work done? I think that's exactly right.
>> Well, Julie, thank you so much for giving us a lesson. Now, if you'll excuse us, we have to go deal with our AI brain fry.
>> I actually have AI brain freeze. It happens if you use ChatGPT while you're drinking a Slurpee.
>> Well, as long as it's not AI brain rot.
>> Yeah, fine.
>> Yeah. Oh, we got there a long time ago. Thanks, Julie.
>> Thanks, Julie.
>> Thank you so much. This was great.
>> Well, Casey, I heard you got an exciting new job last week.
>> I did, and it was the sort of job, Kevin, that I didn't even know that I had or was doing.
>> So, you had this crazy experience of being selected, against your will and without your permission, as one of the experts for Grammarly, the AI writing assistant. They have an expert network of people whose voices they have borrowed for the purposes of, I guess, making people's writing better. So, congratulations. I assume the royalty checks are just overflowing your mailbox. But what actually happened here? You had a fascinating newsletter about this this week.
>> Well, thank you. So this story I first learned about from The Verge. Their reporter Stevie Bonofield wrote about this, and it turned out that last summer Grammarly had added this feature called Expert Review. I had not actually used Grammarly until this. Have you ever used it?
>> No.
>> So I decided, you know what? Why don't I sign up for the free trial and see what Grammarly can do for me? And as it turns out, if you go to the support page for this feature, it says that Expert Review, quote, is designed to take your writing to the next level with insights from leading professionals, authors, and subject matter experts. That sounds pretty cool, right? Well, scroll a little further down, Kevin, and you see the following disclaimer: References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities. And so I read that, and I thought: when you say that these insights come from leading professionals, what does the word "from" mean to you? Because it sounds like what you're telling me is they don't come from those experts at all.
>> Yeah. It's like when you see a tub of margarine and it says "butter-style product" in very small type.
>> Yeah. They have sort of an expert network with an asterisk: none of the experts were actually consulted, and we didn't actually hear from them in any way.
>> Absolutely. So Stevie over at The Verge put a bunch of writing through Expert Review to see what sort of expert names would pop up. I was one of them.
>> Congratulations.
>> Thank you. As you might imagine, Grammarly also picked a bunch of actual famous people: Stephen King, Neil deGrasse Tyson, Carl Sagan. And I decided to put this thing through its paces myself, loaded up some recent columns that we published in Platformer, and pasted them in to see what sort of experts it would suggest. And while I was never able to get my own name, Kevin, I did see a succession of people that felt like, if you made a list of the people who would hate this idea the most, that is who Grammarly had picked. So Timnit Gebru, a very vocal critic of AI systems and the way they are built and deployed, showed up as a quote-unquote expert. So did Julia Angwin, who is an investigative reporter. She writes for New York Times Opinion, and it used her writing even though she has written a lot about how tech systems are used for privacy and surveillance in ways that are contrary to how we want them to be used. Julia, by the way, filed a class action complaint against Grammarly's parent company on Wednesday, seeking to stop them from, quote, trading on her name and those of hundreds of other journalists, authors, and editors, and to stop them from, quote, attributing words to them that they never uttered and advice that they never gave.
>> Wait, can I ask a question about the mechanics of this? You're writing in Grammarly, which I gather is sort of a bolt-on to a word processor?
>> Yes.
>> And it sort of detects the topic you're writing about and then pops up a little Clippy thing that's like, would you like Julia Angwin to edit this for you? Would you like Casey Newton to give this one a pass?
>> Exactly. I'll actually show you an example here if you want to look at my laptop. You can see that here is the text that I wrote. And then in this little left-hand column, in this case, it just says Kara Swisher. Kara Swisher, my good friend, past Hard Fork guest, legendary Silicon Valley journalist and podcaster, and someone who has absolutely no involvement with Grammarly, but her name just pops up there with no disclaimer at all.
>> Right.
>> And then when you click in, it will offer this sort of Kara-inspired advice. And this is the point, Kevin, where I would like to talk about the kind of advice that this thing actually gives.
>> Please.
>> So, you might expect, given that they were allegedly trying to borrow the expertise of real humans, that that expertise would seem incredibly specific to that person, right? Instead, what you're getting is just a bunch of very generic advice about something that you might do. I noted, for example, that my colleague Ella Marianos wrote a story in Platformer last week where she went to a protest at OpenAI, and there was a suggestion that Grammarly said was inspired by John Carreyrou, the legendary investigative journalist who brought down Theranos. And the advice basically boiled down to: try opening with a colorful scene and use a lot of rich details and characters. Like the most absolutely generic advice you would ever imagine getting, and nothing like what I imagine the actual experience of sitting down with John Carreyrou and asking, "Hey, how did you write Bad Blood?" would be.
>> Yeah. How did it say that Kara Swisher would edit a story?
>> So, I will just read you the piece of advice that it gave me. This was also about the protest story. The fake AI Kara said, "Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through line readers can follow? A synthesizing sentence here may tighten the narrative arc."
>> I'm laughing because that is the exact opposite of how I imagine Kara Swisher would edit someone. It would just be a string of four-letter words and, "This sucks. Do it over again."
>> Yeah. It would say, "Stop wasting my time." That would be the advice. The thing that I just read, I just want to acknowledge, is word salad. Do you know what I mean? I don't know what underlying model they're using here. I'm guessing it is not a frontier one, right? It's reading very GPT-2 to me, you know?
>> So, this advice is so bad. But let's get into what I actually find upsetting about this, Kevin.
>> Yeah, let's make this about you.
>> No. Well, here's the thing. I'm actually not going to make it about me, because I have long since accepted that all of these companies have stolen all my intellectual property and are having their way with it. Where I really feel bad is for the subscribers to Grammarly. These people are paying $144 a year to be able to use this glorified spell checker. Okay? And they load this thing up, and Grammarly gives them this service. So if you are a paid subscriber to Grammarly, you are paying a subscription to get Grammarly to hallucinate on your behalf, right? To make up a bunch of stuff that is not true. This is not the actual advice that any of these experts would provide, and you are paying for that service when you just as easily could have taken whatever text you had written, pasted it into a free chatbot, and gotten generic advice that is just as not-great as what you were getting here.
>> Right. And the truly crazy thing about this is that despite charging all this money for people to use this substandard AI product, they are not, to my knowledge, passing any of it along to you or Kara or John Carreyrou or any of these authors whose identities they have purloined for the purposes of selling this product.
>> No, they're not. And look, I think that all of the AI companies just have a huge entitlement problem in general. They think, look, if it's on the internet, it's in the public domain and it belongs to us. And they don't spend enough time thinking about how they are destroying the incentives for anyone to create a public, open internet, if you feel like you're just going to get screwed in this way. So I do think that is really unfortunate.
>> Yeah. So, what did Grammarly say when you started writing about this?
>> Well, when I reached out to them, they thought about it for a while and then finally came back to me on Monday and said, "You know what? We've thought about it, and if you're one of our experts, who we didn't consult and we're not paying, you can now opt out of this feature."
>> How nice of them.
>> So, you can now send an email and say, "I don't want to be a part of this system anymore." And so I wrote the story and got a lot of comments on social media, like, geez, that really seems like the least they can do. But Kevin, as we record this, I actually have some breaking news.
>> What's that?
>> So I got an email from the spokeswoman over at Superhuman today. Superhuman is what Grammarly now calls itself. They did a rebrand last year, and they're now sort of a bundle of mediocre products. And they sent me a note and said that, "after careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users while giving experts real control over how they want to be represented, or not represented at all." Dot dot dot. "Thanks for holding us accountable. We're committed to getting it right next time, and we'll be transparent about how we improve from here."
>> Results. Newton gets results.
>> Newton getting some results this week. I mean, look, it's clear to me that they are embarrassed about this. But this is one where, the whole time I was using this thing, I was like, who was the product manager? What were the meetings?
>> Imagine the meetings.
>> Imagine! Was there a lawyer involved in that? Who was the lawyer that signed off and said, "Yes, feel free to misrepresent that you are getting inspiration from all of these different editors"? So the thing is such a spectacular misfire, and it really made me wonder what the future of a product like Grammarly is, and that's kind of where I want to end this. You just finished writing a book. You presumably could have used some sort of AI writing assistance. Did it ever occur to you to use Grammarly?
>> No.
>> Why not?
>> Because I don't know anything about it and I don't need it and I have other tools. Well, so talk to me about these
tools. Well, so talk to me about these other tools because this is what I think the real story is, which is like in 2009 when Grammarly launched, you didn't have a lot of options for writing assistance, right? You had like whatever spell
right? You had like whatever spell checker was in Google Docs and like that was, you know, probably going to be the best tool available. Fast forward to today though, you got Chat GBT, you got Gemini, you got Claude, there are free
versions of these services. If you want a quick grammar check, you can get it.
My guess is that's the experience that you just had.
>> Yeah. If I want a grammar check, I'm just copying and pasting into one of the AI models. I'm not using a purpose-built thing for that. Or it's now built into, you know, Google Docs.
>> Yeah. And to, you know, emphasize a point: when you're using Claude, as you did in your book, you're using the latest and greatest version of Claude.
If you are using some sort of startup that is using Anthropic's API, they're not actually incentivized to give you the frontier model most of the time, right? Because that's going to be very expensive. So they're going to give you a model that's a couple of generations old, because they can get a lower price and their margin is going to be better on it. So
we've talked a lot in recent weeks about the potential for a SaaS apocalypse, where these companies that are selling these sort of, you know, businessy prosumer services are going to get crushed by the fact that there is now just a cheaper way to do it. I wonder if you think Grammarly might be one of those.
>> No, I think it's going to be part of the ass apocalypse, which is for software that absolutely sucks and that there's no reason to be using in the first place. And I think that software has a hard road ahead.
>> I just do not think there is a future for this product. And when I saw this, yes, I did have the moment of... outrage is too strong a word. I felt supremely annoyed. Okay, I did feel very annoyed that this was happening. But again, I know all these companies have, like, read all my stuff. You know, you could go into Claude today and say, "Give me editing on this piece in the style of Casey. Like, draw inspiration from Casey Newton and edit my piece."
Claude is not going to refuse and say, "I don't have the rights to his intellectual property." It's just going to do it, and it's not going to notify me, and it's not going to pay me. Right? So,
I do think that there is a distinction between what these companies are doing, but I just want to point out that in some way, the violation is the same.
The bigger thing to me was this really feels like desperation.
>> And I think more and more of these consumer internet services have been able to get away with offering a pretty subpar product and selling it to you for more than $100 a year. I think the rude awakening is showing up, you know, where all of a sudden, if you have a subscription to your Claude or your Gemini or your ChatGPT, you're probably going to be able to get more from that and do more things, and you're just not going to need the subscription anymore. It's exactly like when we were talking about vibe coding and being like, why are we paying Squarespace all this money?
Right. I think the "why are we paying Grammarly all this money" moment is coming.
>> Yeah. And I should say, if you want to rip off Casey Newton's editing style without his permission or without compensating him, you should just do that in a free chatbot. His advice is
not worth that much.
Trust me, I have seen his edits, and I would not pay $140 a year for them.
>> I'm a great editor. Okay, ask around. You should really ask around. I have some very detailed, thoughtful feedback. Uh, but
>> No, this is horrible. I'm very glad you exposed it. I'm very glad they went back and said, "We're not going to do this anymore." But I think this kind of thing is going to keep happening, unfortunately, because there is money to be made, and if you can get away with it, you're going to do it.
>> Yeah. You know, an interesting question might be like, is there a good version of this feature? And what would that be?
>> Do you think so? If they had come to you and said, "Uh, hey Casey, we're starting this new expert review feature, and every time someone edits their emails to sound more like Casey Newton, we're going to give you 10 cents," would you have done that?
>> I mean, I don't know. In general, I am in favor of AI companies trying to strike deals with creative people that say, we are going to essentially share the revenue that is based on the creative work that you have done. So
certainly I would like to see some kind of explorations like that. But, you
know, I think about some of the editing I've done. You know, I can remember working with one writer once, and she was working on a kind of narrative feature story, and it just made me think of Katharine Boo, the great features writer for The New Yorker for a long time, who wrote this incredible book, Behind the Beautiful Forevers. And I was like, go read Katharine Boo. Go read Katharine Boo pieces in The New Yorker and see how she evokes characters and how she structures her narratives. So can I imagine an AI tool that you were having a conversation with that also said, like, you need to read some Katharine Boo, and click here, and hey, if you already have a New Yorker subscription, maybe you can log in right here and we'll bring up some of the relevant passages? So yes, I do think that there is value in sort of
guiding writers to actual experts. The
key is you have to guide them to the actual expertise, not just what your three-year-old LLM is hallucinating.
Right? That would be my worry if they had come to me, which they did not. Um, which I'm a little bit offended by, frankly. They put you in the feature. Was I not worth ripping off? I'm right here, Grammarly. I'm pretty good.
Um, but no, had they come to me and said, "Hey, we want to make you part of this," I would have said, "Well, how good is your model?" Because, you know, my worry about something like that would be that someone would open up their word processor, start writing their business memo, and say, "Make it sound more like Kevin Roose." And then it would make it sound terrible and generic, and people would blame me, and I would kind of get a bad rap for allowing my reputation to be laundered in this way. I was texting
with my friend Mat Honan, who's the editor of MIT Technology Review, and he found that he was also being used as an expert. And when he clicked on the expertise that he was allegedly providing the user of expert review, to see what source they were citing, it was, um, a speaker bio that he had submitted for an event. So, like, based on Mat's speaker bio. He used to work at Wired. I don't even know what it said. But again, it's just like...
>> That they just did not think this through.
>> Yeah.
>> Well, now, as a result of Grammarly pulling back this feature, if you want your emails to sound like Casey Newton, you're just going to have to put a bunch of typos and random punctuation in yourself, manually.
>> And if you really want to know what I think of your writing, it's that you should start a podcast. That's where the future's going.
>> Okay, Casey, I'm glad you escaped Grammarly servitude.