Ask the Economist: Is A.I. Really Coming for Your Job?
By Hard Fork
Summary
Topics Covered
- Markets React to AI Sci-Fi Essays
- No Hard Data on AI Impacts Yet
- AI Enables Double-Digit GDP Growth
- AI Will Substitute Human Labor
- Recursive Self-Improvement Drives Hypergrowth
Full Transcript
>> Casey, how's it going?
>> Going well. Kevin, how are you doing?
>> You had some big news over the weekend, my friend.
>> I did. I did. Uh, we're going to have to update the disclosure.
>> Yes. Why is that?
>> For the past year or so on this show, I've been disclosing that my boyfriend works at Anthropic. But we're not going to say that anymore, because I don't have a boyfriend. I have a fiancé.
>> So, yay. Yeah.
>> That's so exciting. They say that getting married is the second most serious kind of relationship you can get into with a man besides starting a podcast with him.
>> So, uh, we'll see how it goes, but I'm very optimistic.
>> Have you decided on a theme for your wedding yet?
>> Wow, that's great. You know, I have to admit we're at the very earliest stages of the planning, and so if you have any, you know, ideas, I'm very open to that.
>> I did start brainstorming possible wedding hashtags, you know, because every couple needs one of those.
>> Absolutely.
>> So, how about these?
>> Okay. AGI do.
Uh, any others?
>> Uh, say yes to the press.
>> Now, that one I like. That one I like. That was good.
>> Or, of course, the classic, my husband works at Anthropology.
>> Well, Kevin, another week, another viral essay predicting AI-caused doom and roiling the stock market. What is going on?
>> Yeah. So, the big news this week was that an essay written by a research firm called Citrini Research went viral. The essay is called "The 2028 Global Intelligence Crisis," and it basically sketched out a near future in which the AI industry eats not only the labor market but also the business models of a number of leading companies. There were lots of examples. It's a very long essay, but basically this was one firm's attempt to say, "Here's what the next few years could look like if AI progress continues." And what this firm says it will look like is pretty bad, Kevin.
>> Right. The suggestion here is that AI agents improve and take over the economy, and as a result you're going to see massive job losses and a huge contraction in the stock market. And a lot of individual companies named in the piece, DoorDash was a big one, this essay predicts are going to have a really, really hard time.
>> Yeah. And I was not that impressed by the Citrini Research essay. I thought it made a number of logical jumps that I wouldn't make. But it had a big impact. People are blaming this essay for triggering a massive Wall Street sell-off. Companies like DoorDash, American Express, and Blackstone all saw their stock prices drop more than 8% immediately after it was published. So we are now in the era of market-moving science fiction, where anyone with an opinionated and reasonably informed take on what AI is doing to the economy can trigger billions of dollars in losses in the stock market if their essay catches fire, as this one did.
>> That's right, Kevin. And that's why I'm calling on all science fiction authors to register with the Securities and Exchange Commission. Your ideas are too powerful and they must be regulated.
>> Yes. So, we're not going to spend this whole episode talking about this one essay, because I think it is symptomatic of something larger and more interesting that is happening right now, which is economic uncertainty about where all of this is headed: where AI is going, what effects that's going to have on the labor market, on the productivity gains from companies that are implementing it, on the business models of some of our largest companies. It all feels really uncertain and tenuous right now. And I thought instead of just going line by line through this essay, we should actually bring in someone who knows the economy and has been thinking about this stuff for far longer than we have.
>> Yes. As much as we would like to share with you what we remember from freshman year macroeconomics, we thought this may be a time to call in the big guns.
>> Yeah. So today we are bringing you a conversation with Anton Korinek. Anton is a professor in the Department of Economics and the Darden School of Business at the University of Virginia. He's also, since last April, a member of Anthropic's Economic Advisory Council, and I've been really excited to get him on the show for a long time. I have been a fan of his work, and I would say he's been at the forefront of economists who are trying to work out what effect AI will have on the economy. He did not come to this question recently. He's been working on this for more than a decade, and he has become well known as someone who is willing to consider maybe somewhat more extreme scenarios than many of his colleagues in economics. For that reason, I think he's really interesting.
>> And look, Kevin, I think we all want very simple, clear answers right now: exactly what is going on, exactly when might massive job loss begin. And the truth is, we don't know. We do not have the data. We don't understand today's capabilities well enough, much less tomorrow's capabilities. And so we cannot give you one clear answer on everything that's about to happen. But I think the mere fact that the markets can move so much based on almost nothing underscores how high anxiety is right now. And so I think it's helpful to just talk to someone who follows this stuff very closely and is able to tell us in no uncertain terms what we know and what we don't know. So, let's bring him in.
>> And before we do that, you already made your updated disclosure this week that your fiancé works at Anthropic. And I will make mine, which is that I work at the New York Times, which is suing OpenAI and Microsoft and Perplexity over alleged copyright violations. All right, let's bring in Anton Korinek.
>> Anton Korinek,
welcome to Hard Fork.
>> Great to be on air with you.
>> So, I am very excited to have this conversation with you. You're a guest I've been wanting to bring on the show for a long time. And we are finding you at a moment where the entire economy seems to be resting on these kind of load-bearing essays, these works of, you know, extrapolation or science fiction, whatever you want to call them. This week, we had this Citrini Research report about the 2028 global intelligence crisis. Before that, it was another essay. So I'm very curious what you, an economist who's been looking at the issue of AI for many years now, makes of this moment where markets seem so reactive to even small changes in perception.
Yeah, you know, it's a funny moment, because I have been studying this for a decade now, and I have been waiting and waiting for markets to wake up to what's about to hit us. And then it's seemingly small, almost random little things that actually produce big market reactions. So yeah, markets move according to emotions, and I guess this is one of those instances. But in the background there are also some very real developments, and I guess we're here to discuss those today.
>> Yeah, that's right. We're hoping that today we can maybe drain a little bit of the emotion out of the conversation and get into the cold, hard facts. So Anton, what can you tell us about what the current economic data says about this moment? What is actually happening? Is there data that suggests something is really shifting, or is this still more in the realm of vibes?
>> It's still in the realm of expectations. So if you look at the actual data, you can see some relatively small impacts of AI on things like the job market and productivity growth, but they're still, first of all, in the territory of very small fractions of a percent, and secondly, still contested. At this point there are a couple of economic research papers that say, yes, we can see something in the job market for entry-level jobs. But there are also people who still say, well, there's this and that that's wrong in this paper, and we could actually interpret these results in a different light. So in short, there is no really hard economic data yet. I'm actually afraid that even by the time when all of us are going to see, yes, this is clearly visible now, the economic research is still going to be slightly contentious.
>> And why is that? Is it because it just takes a while to collect all the data for these things to start showing up in productivity numbers? Is it the lag, or is there something about the way that AI is transforming the economy that is not able to be captured in the kinds of economic data we collect?
>> I think it's a little bit of both. Our economic statistics are designed in part to be very, very comprehensive, and it takes time to compile them. They get revised, because the first take is not necessarily the final one. So if you look at things like productivity, that's where the time lags really hit you, and where you really have to just live with the fact that we won't have a fully clear picture until like a year after the data has actually materialized. But the second thing is also that the technology is advancing so rapidly. The ChatGPT that you work with today is very different from the one a year ago, and it can do much more, especially when it comes to things like coding or white-collar work.
>> So let's dig into one of these pieces of research. There was a paper at the National Bureau of Economic Research from earlier this month called "Firm Data on AI." They surveyed 6,000 executives. It found that 70% of their companies used AI, but that 80% of the firms reported they had seen no impact on employment or productivity. I feel like we see these kinds of surveys a lot: this technology is being widely deployed, and we can't tell if it's doing anything. How do you, as someone who does believe that AI will eventually transform the economy, make sense of this kind of research?
Yeah, I think there's a very big gap between the frontier of what's possible and what is actually used in daily practice. And what the paper that you just mentioned tells us is that in the field, when it comes to how actual corporations were using these technologies as of a couple months ago, there wasn't really that big of an impact yet. And I think that corresponds to everything I'm seeing and hearing when I talk to executives. People are still at the stage where they're trying to figure out: how do we actually deploy these systems productively? How do we go from, let's say, the shiny demo to having a productive impact on our work, where we can do more, where we can do things more cheaply, and with the same level of reliability that we've always worked at?
>> One of the concepts in this 2028 global intelligence crisis essay that got a lot of attention was something the authors called "ghost GDP": this idea that as AI gets more capable and does more work, we will have these increasingly productive firms creating increasing amounts of revenue and GDP, but that it will not be showing up in the pockets of workers, because machines are doing the work. Does that track with any of the research you've been doing? Is this ghost GDP a real concept that we should be worried about?
>> I'm worried about it. Sounds very
spooky.
It's definitely a spookier term than the one I have encountered this under, but frankly, you know, it does track very much with what the general expectation is if the technology reaches the level of something like AGI, or powerful AI, or whatever you want to call it. In some sense you can say it's even worse than that. On the one hand, there's going to be a lot of GDP that is not going to be produced by humans in the loop, so no worker is ever going to get the benefits of that. But on the other hand, there's also going to be quite a significant amount of economic production that doesn't even show up in GDP, because it gets counted as an intermediate good. Things only show up in GDP when they are final consumption, or final investment in capital that we can accumulate and that has a useful life of a certain period. And a lot of the AI economy is not going to be reflected in GDP.
>> I'm curious. There's this debate going on among economists that I talk to. Some of them will say, you know, we just don't ever really see instances of the economy growing as quickly as some of the people in Silicon Valley think it might. You know, 10 or 20% GDP growth, that's just unprecedented in our history. And so they're expecting that AI will make things grow much more slowly, maybe a percent or two a year, which would be big in relative terms, but not the kind of hypergrowth scenario that some people out here in the Bay Area are envisioning. Then you have people like the folks at Citrini Research saying, we're about to see something we've never seen before. We're about to see an entire economy becoming unmoored from any of these cyclical patterns. So where on that spectrum do you fall? Where does the data lead you, between the slow-growth 1% or 2% a year and the 10 or 20% a year hypergrowth scenario?
>> Yeah, I'll say two things about that. The first one is that the story has not been written yet. And there is a possibility, if we develop this technology in a really irresponsible way, that we could actually see some self-reproduction that takes off and that leads to triple-digit GDP growth numbers, if measured from the eyes of the AI. But if we deploy the technology in a way that makes the average person better off, then I think triple-digit growth numbers are completely unrealistic. They would lead to way too much disruption. And then, I'm not quite sure. I think just 1% is definitely going to be too low to be realistic, from my perspective. In really optimistic scenarios, I think we could get to low double-digit growth rates. And I should say that presupposes not just cognitive AI, but full AI in the way that is, for example, defined in the charter of OpenAI, where they talk about systems that are highly autonomous and that can perform most economically valuable work. So that also includes a physical component, the robotics part. Otherwise, it won't have that big of an effect on GDP, because the majority of the economy isn't just sitting in front of a computer.
>> Right. And I think a lot of people right now who are looking at the stock market and these viral essays and trying to make sense of it all are feeling a lot of cognitive dissonance. Because on one hand, we have people who seem very smart saying AI is transforming everything, every company is doing things differently than it was a couple months ago, we are headed into uncharted territory. And then you look around, and we're still below 5% unemployment. We still don't see a huge productivity boost. Most people who are using this stuff at work are only using older models, or their IT department won't let them use the agentic coding stuff. And so it does seem like we are seeing a growing disconnect between what people who are looking at the technology are saying is going to happen and the observable reality around us. So what do you make of that disconnect, and how should people be feeling about these projections of rapid change?
Yeah. So the one part that we already touched upon is the gap between the frontier capabilities and the actual implementation. That part is real, and it is very significant. That's also something that is kind of bound to disappear over time, right? But the second part is that ultimately, all the projections that we are hearing are extrapolations, and people react very differently when they see how much the AI systems have improved, let's say, over the past year. Some people just naturally jump to the conclusion: well, let's extrapolate this, and of course these systems are going to be way smarter than any human within just a small number of years. And then there is another camp that says: well, what our brains are doing is so special that machines won't be able to replicate it for a very long time, and these machines are going to kind of asymptote to somewhere below our brain's capabilities. And frankly, both are speculative positions. I personally am, first of all, willing to embrace the uncertainty about it, and I think we all should. But if you ask me to make the one guess that I feel more comfortable about, I would say capabilities are probably going to continue to increase, and I don't think there is any clear limit in front of us in the near term. And so I do expect that there are going to be very significant economic impacts.
>> Yeah. So, let's extrapolate a little further into the future. In 2017, you co-wrote a paper in which you suggested that, quote, "progress in AI is more likely to substitute for human labor or even to replace workers outright than it is to be complementary for most jobs." At the time, you were way out on a limb when you wrote that. I imagine you feel that today more than ever. But what is giving you that confidence? And to what degree do you feel like we've started to see it, that it feels more true than it did in 2017?
Yeah. And just to be sure, that was always meant to be a prediction about AI systems that are essentially at the level of AGI or beyond, not for the literal systems we had in 2017, which could barely tell apart a dog and a muffin. I think ultimately where my perspective is coming from is that I have studied neuroscience, and I have studied computer science. And at some level, once deep neural networks became powerful, I felt it is hard not to reach the conclusion that, well, it looks like eventually these systems will be able to do pretty much anything that our brains can do, and they're subject to much, much more relaxed constraints: they don't need to fit into a tiny human skull. We can scale them almost without bounds. And in some sense, that's what we have seen over the past decade, right? We've seen lots and lots and lots of scaling. At this point these systems consume the energy of cities, as opposed to what our brain does, which is the energy of an energy-efficient light bulb. And that's still not the limit: they're still increasing in size, increasing in capabilities, and of course the algorithms are getting better and better. So based on that perspective, I just don't see why there would be any natural limit, and certainly not why there would be a limit that's below our human intellectual capabilities.
>> Right. And I think the question then is, as this world arrives, what happens to the jobs? In economics, some of our listeners may not have familiarized themselves yet with what's called the lump of labor fallacy, right? The idea that there is a fixed number of jobs to be done, and any job lost to automation will therefore never be replaced. We call it a fallacy because, ever since economists started tracking it, automation has always led to the creation of more jobs. Anton, you mentioned in another interview that it's hard for economists to pivot on this, because they've fought this fallacy for so long. What does it feel like to be an economist saying: actually, this time people should worry that the jobs are going away for real?
Yeah, it does feel very strange, and I have gotten a fair amount of flak from my fellow economists over the past decade. Although I'll say, over the past year or two, many of my colleagues have said: well, I still don't entirely buy your worldview, but I'm glad somebody is thinking about it, and I wouldn't rule it entirely out. Yeah, it is a fallacy that whenever a job is lost in the economy, that person is going to remain unemployed forever. But I think what we really want to look at is overall demand for human labor. And if that demand curve shifts downwards, because AI systems can supplant more and more of it, then what that's ultimately going to imply is that either the quantity of jobs or the wage levels or both may contract. Now, I should say there's also the possibility that labor continues to do okay, and it just doesn't grow as fast as the rest of the economy. In other words, the labor share of output is going to shrink, but at least we are not falling behind in absolute levels. Our economic theories tell us that whether that outcome materializes, or the one where labor just flat-out loses, depends in part on the speed of automation. And, you know, for all of our sakes, I'm crossing my fingers, and I'm hoping that we will only lose out in relative terms and not in absolute terms. But right now, I don't think we have any data that can tell us with any degree of certainty which of those outcomes is going to happen.
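For readers who want the mechanics behind "the demand curve shifts downwards," here is a minimal sketch (our illustration with made-up linear curves, not Anton's model) of how a downward shift in labor demand lowers both equilibrium employment and wages:

```python
# Minimal labor-market sketch (illustrative, made-up numbers).
# Demand: w = a - b*L   Supply: w = c + d*L
# If AI shifts demand down (lower a), both employment L and wage w fall.
def equilibrium(a: float, b: float = 1.0, c: float = 10.0, d: float = 1.0):
    """Employment and wage where labor demand meets labor supply."""
    labor = (a - c) / (b + d)
    return labor, c + d * labor

l0, w0 = equilibrium(a=100.0)  # before AI substitution
l1, w1 = equilibrium(a=80.0)   # after AI supplants some labor demand
print(f"before: L={l0:.0f}, w={w0:.0f}; after: L={l1:.0f}, w={w1:.0f}")
```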
>> And I want to return to something that you said a few questions ago, which was that you expect the gap between frontier AI capabilities and workplace diffusion, how workers are actually using this stuff, to shrink over time. I'm not so sure about that. I've spent a lot of time talking with leaders of businesses and educational institutions, and I would not say that their speed of deployment is increasing all that much. You know, they've got security fears and privacy fears, lots of reasons why they don't want to just start throwing this stuff into their work. So maybe help me understand why you believe that gap might shrink.
>> I may have expressed myself a little unclearly, but what I meant to say is that the current capabilities are eventually going to diffuse through the economy, and of course by that time, I'm very much with you, the actual capabilities are going to have advanced even further. And if we are on this trajectory of skyrocketing capabilities, the gap itself may indeed go up rather than down. I think that is probably the most plausible outcome. But what I really wanted to emphasize is that the capabilities we currently have are eventually going to diffuse, and are at first going to have broad productivity effects, because right now AI systems are still in many ways very complementary to workers. But as soon as they reach the level where they become substitutes, there are also going to be some adverse labor market effects.
>> I'll tell you what I want.
>> Go.
>> I want to know how people are actually using AI at work. Because the data that we have is largely self-reports, and I think some firms have exaggerated how much they are doing with AI, because they want to appear to be cutting-edge and futuristic: look how transformed we are. And I think some people, especially workers, are downplaying how much they're using AI, because they're embarrassed about it, or it's against their company's IT policy, or they're not sure they're allowed to be doing it. And so I just don't think we have very good granular data about what people are actually doing with AI at work, and whether it is speeding them up or slowing them down. And if I could have a crystal ball, well, I guess I wouldn't need a crystal ball. I would need a surveillance apparatus.
>> Kevin wants to scan workers' computers, but I just...
>> We do have a little bit of that. Both
OpenAI and Anthropic publish data on how their systems are actually used almost in real time and that gives us a bit of
a picture of where we are. But it tells you only so much.
>> Can you give our listeners a sense of whether there are two or three core indicators or core reports that, as they come out, make you think: okay, here we go, I finally get to update and see if we're getting closer to a future of mass job automation? What are those things that, as they come in, are updating your understanding?
So the sheer level of capabilities is probably the most important one. You can follow whatever benchmarks you want, or some amalgam of benchmarks, that tells us where the AI systems are still lagging and where they are already doing amazingly well. And one of the biggest shortcomings right now, though of course from the perspective of workers that's great, because it keeps the systems complementary to us, is that these systems are not learning dynamically. The way that the current LLMs work is they're trained once, and after that the weights are frozen in place. And that means, for a lot of work applications, even if there are very basic mistakes, they have to go through the same mistake again and again and again, because they can learn only so much from it. So that's another sort of breakthrough that I'm looking for. And then maybe a third chart that I'm regularly following is this METR chart that looks at how long of a task AI can automate. I think they usually find that every seven months, that time frame doubles. And looking at how this is continuing is also quite helpful in understanding whether the exponential growth trajectory is intact, or maybe even accelerating, as it has seemed recently, or whether we're anywhere near plateauing.
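To put numbers on the doubling trend Anton mentions, here is a small sketch (ours, with an illustrative starting horizon, not METR's actual data) of what a seven-month doubling time implies:

```python
# Extrapolating a task-length horizon under a seven-month doubling time.
# The one-hour starting horizon is a made-up illustration, not a
# measured value from METR's chart.
DOUBLING_MONTHS = 7

def horizon_hours(months: float, start_hours: float = 1.0) -> float:
    """Task horizon after `months`, doubling every DOUBLING_MONTHS."""
    return start_hours * 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 3, 5):
    print(f"after {years} year(s): ~{horizon_hours(12 * years):.0f} hours")
```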
>> Anton, you mentioned that when you started writing about AI and automation and potential job loss and economic transformation a decade ago, your colleagues in economics were very skeptical. You were seen as something of an outlier in your field.
>> Yep. One of my senior colleagues asked me, "Are you really sure you want to throw away your career over this?"
>> So, obviously that's no longer true. You now have many mainstream economists looking at these issues. What are the ideas right now that you believe that put you on the fringes of your profession, that many of your colleagues disagree with?
>> So I do have the impression that taking the notion of something like artificial general intelligence really seriously is still a fringe perspective in the economics profession. You're right that there are more people coming around to it, but it's still a small, though increasingly loud, minority. I also believe that if we seriously reach AGI, that's not going to be the end, but the beginning of a really significant transformation of the economy. And in that respect, I'm probably even more on the fringe of where my fellow economists are.
>> Yeah. You've written about this possibility of hyperbolic growth. Basically, what happens if we get recursive self-improvement? The AIs start building better AIs; they start building robot factories and basically create their own economy. And you actually tried to model what might happen in an economy where AI reached this critical inflection point. What did you find?
>> Yeah. So, the first thing that we found is that there's going to be a whole bunch of feedback loops that mutually reinforce each other. Let's say we do reach this point of recursive self-improvement on the software side. AI systems that can do this are going to feed into the research process on the hardware side, and are going to accelerate hardware research, the technological advances on that front. Moreover, they are also going to accelerate research in anything else where cognitive work, where smart things, can be helpful: let's say, for example, unlocking additional cheap energy sources like fusion, and creating better robots. And all of these things feed into each other, because those advances in turn help the AI advance more. And if you put it all together, you can get vastly superexponential growth. In our model, it is hyperbolic growth leading to a singularity. Physics tells us that a literal singularity can't actually happen, because there's going to be some resource limit at some point. But what I expect is that these feedback loops in the real world would lead to massive growth, until some new bottleneck that maybe we haven't quite identified yet is reached.
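For the mathematically curious, here is a minimal sketch (our illustration, not the actual model from Anton's paper) of why a growth rate that rises with the level of output blows up in finite time, which is what "hyperbolic growth leading to a singularity" means:

```python
# Hyperbolic vs. exponential growth (illustrative parameters).
# Exponential: dx/dt = g*x          -> grows forever, never blows up.
# Hyperbolic:  dx/dt = g*x**(1+eps) -> reaches infinity at finite time
#              t* = 1 / (g * eps * x0**eps), a mathematical singularity.
g, eps, x0 = 0.05, 0.5, 1.0
t_star = 1.0 / (g * eps * x0**eps)
print(f"analytic blow-up time: t* = {t_star:.0f}")

# Crude Euler simulation up to 95% of t*: output is already enormous
# and accelerating, long before the literal singularity.
x, dt = x0, 0.01
for _ in range(int(0.95 * t_star / dt)):
    x += dt * g * x ** (1 + eps)
print(f"x at 95% of t*: {x:.0f} (vs. x0 = {x0})")
```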
>> I'm curious, you know, you have to go in a few minutes to teach your graduate students. How has what you have studied changed what you tell your students about how they should think about their careers?
>> You know, a couple years ago I decided, well, I will just be blunt about my beliefs about this. And I am telling my graduate students that I'm not 100% sure there will still be jobs for economic researchers by the time they graduate. I am crossing my fingers for them. I hope that there will be, but I don't think we can count on it at this point. And I think all of us have to face this fundamental uncertainty about where the economy is going to be in a couple of years.
>> And how has that affected your course reviews that you get back from the grad students?
>> Um, that's a very good question. I have not done a systematic statistical analysis and there aren't enough data points to
say for sure whether AI has increased or reduced my teaching productivity.
>> Got it.
>> Yeah. Got it.
>> Speaking of productivity, I want to ask you about this framework that I've been working on for thinking about how AI might transform the economy. So basically, as I see it, there are three possible outcomes here. One is the lumbering giants outcome, where you have these big companies that dominate the economy, and they're just too slow and too regulated to really adopt all the new AI stuff quickly, and so the economy just kind of chugs along for a while, maybe growing at a percent or two a year, but nothing fundamentally changes. The second option is the sprinting giants outcome, which is where these big companies actually get their acts together and start moving really quickly. Maybe they lay off a bunch of people, maybe they create a bunch more new jobs, but they're much more productive, and the economy 10 years from now is still dominated by the same giant companies we have today. And then there's the third option, which is the dead giants outcome, where basically every company that dominates today is going to be crushed by a competitor using AI with, you know, a hundredth or a thousandth of the labor that they have, and we're essentially going to see this swallowing of the old economy by this new AI-powered one. Of those scenarios, is there one that you think is more plausible? And is that even the right way to be thinking about the possible outcomes here?
>> I think those are interesting scenarios to think about, and my best bet would be that we'll see a mix of the second and third scenarios: there are going to be some sprinting giants that are going to do okay, given their incumbency advantages, and there are also going to be some sectors where newcomers are going to overpower the lumbering giants, to use your analogy. And ultimately, I do think that the technology will diffuse; whether that's through the existing companies or through the newcomers depends largely on how fast the giants are going to move.
>> If you are a public company CEO right now, what do you think there is to be done? Obviously, there is a lot of anxiety from the market about what your company ought to be doing, but as you've told us here today, a lot of what we're doing right now is just waiting for models to get better at various things. So, what is a rational response to that dynamic from a CEO?
>> Well, the first thing is they should hire my students.
>> Absolutely.
>> Because they know really well how to use the AI.
>> Yes. Yes. But more seriously, I think one of the most critical things is to remain up to date and to remain informed about where the frontier capabilities are. What I see repeatedly is that CEOs of large organizations are at such a high-level position that everything is fed to them by really intelligent humans, and that means they have no reason to access the intelligent AI systems, and it puts them in some ways a little bit at a distance from what's actually happening in the field. So, you know, if they hire some of my brilliant students who know how to use these systems really well, and ask them to give them a frontline view of what AI can do right now, I think many of those CEOs will actually be pretty amazed by what they see.
>> Yeah.
>> And then if they follow that for a number of months, and see how rapidly the capabilities are actually improving, then it naturally leads to decisions like: okay, we can see what these systems can do in simple tests; how do we actually productively employ them in our organization? Now, that gets us to the question of diffusion. It's still a slow process, right? Because you need to experiment, you need to try out things, you need to fail if you really want to push these systems to their limit. But I think it needs to be the starting point, if we want any of our decision-makers to make well-informed decisions on how to react to this rapidly advancing technology.
>> You know, as we wind down here: we have been talking today about how it seems like some people, particularly in the markets, are getting worked up about what might happen, without maybe knowing totally what that is. At the same time, I also see this failure of imagination among so many folks out there who seem to believe that however good the systems are today, they just probably won't get much better, or to the extent that they do get better, it won't affect their lives very much. I wonder how you relate to that. Do you just see that as people who don't want to contemplate what sort of changes might be coming to their life? Do you think it's something else? And what do you think we ought to do about it, if you believe that some of those changes might be really consequential for them?
>> So, first, we all deal with lots and lots of things in our lives, right? And we have only limited bandwidth. And let's say up until a year ago, I very much relate to the fact that, frankly speaking, most AI systems weren't that useful for most people, right? So why would we spend some of our limited bandwidth paying attention to that? And then a second thing is probably a kind of protective response. If you want to seriously contemplate the implications of this technology, it leads to pretty stark predictions; it leads you to pretty stark places. And sometimes it just feels a lot more comfortable to live in the here and now, and not worry about that not-so-distant future that may be quite fundamentally disrupted.
>> Yeah.
>> And the third thing is, in the public discourse you can hear lots and lots of opinions going in all directions, right? I mean, you are much more expert in that than I am. And you can just pick your most comforting favorite opinion out there in the public discourse; you can get so much supply of that. I just don't know if that's the best advice you can get.
>> There's a joke circulating on social media that goes something like: either AI is a bubble, or everything else is a bubble. Which of those is it?
>> If I have to pick one of the two, it would probably be everything else. But having said that, you know, in the economy, things always diffuse more slowly than somebody at the frontier would think they do. So in that sense, let's take the perspective that this is going to be absolutely transformative, and then add that tiny bit of economic reality, that things move a little more slowly when they diffuse, and I think that's probably roughly my median prediction of where we are heading.
>> Well, Anton, thank you so much for joining us. Fascinating conversation, and let's keep in touch. Really appreciate your work. Thank you, sir.
>> Thank you. I really appreciate you devoting attention to these important topics.
>> When we come back, Anthropic versus the Pentagon, Alpha School, and more in our system update segment.
Well, Casey, from time to time, we like to update our viewers and listeners about the stories that we've covered in the past that have had some new developments.
>> Yeah. We'd like to sort of check in on them uh gently without doing sort of a whole segment around them, but at least kind of keeping you up to date with what we've been keeping tabs on.
>> And we even have a name and a theme song for this segment. It's called System Update.
So, our first system update is about a story that we covered on the show last week, which has been moving very quickly. This is, of course, the battle going on between Anthropic and the Pentagon. As a reminder, the Pentagon and Anthropic have been at odds over a proposed change to the terms of service for Claude, which would allow the military to use Claude and other Anthropic AI systems for all legal uses. Anthropic has said that it's fine with almost all uses except for domestic mass surveillance and autonomous killing machines. So, after we recorded last week's episode, Defense Secretary Pete Hegseth summoned Dario Amodei, the CEO of Anthropic, to the Pentagon for a meeting. That was on Tuesday of this week. That meeting was described by the Times as civil and by Axios as tense. So one of those two is probably true.
>> It can be civil and tense. Our recording sessions often feel that way to me.
In this meeting, Hegseth told Amodei that Anthropic cannot dictate the terms under which the Pentagon makes operational decisions. Dario Amodei in turn defended Anthropic's commitment to making sure its models are not used for autonomous weapons or mass surveillance. And Hegseth delivered an ultimatum: basically, if Anthropic does not agree to this all-legal-uses provision by 5:01 p.m. this Friday, February 27th, the Trump administration would take action in retaliation. One of the things it could do would be to designate Anthropic a supply chain risk. As we discussed on the show last week, that would be a very unusual step that is often used for foreign espionage attempts.
>> And it would mean that the government presumably would not use Anthropic's products, and would restrict Anthropic from making deals with any of its own contractors.
>> Yes. And Hegseth reportedly also threatened that the Trump administration might invoke the Defense Production Act to force Anthropic to make its product restriction-free for the government. So, those two things are on the table now if Anthropic does not cave by this 5:01 p.m. Friday deadline.
>> Yeah. And that latter threat, Kevin, to invoke the Defense Production Act: there just truly is no precedent that I'm aware of for the government invoking this to require a company to make software for the government. And again, this would be software that would potentially be able to conduct mass surveillance of Americans, or create machines that could kill people without any human in the loop. And I'm not aware of anyone in the government trying to defend either of those use cases, or speak to why it is such a critical priority for the Trump administration that they be able to do this. And look, I'll say, I find it terrifying that any government would do this to its own citizens. So, I hope people are paying attention to this, because I think this truly has become arguably the highest-stakes conflict in AI that we have so far seen between a big lab and a government.
>> Yeah. I mean, I remember several years ago, when people like Daniel Kokotajlo of AI 2027 were gaming out what could happen in a world where AI systems get more powerful, one of the scenarios people were envisioning was that the government might try to nationalize some of the big AI companies. But this in some ways goes even further than that. It's not just saying, we're going to try to influence how you're building your models. It's saying, we are going to invoke these unprecedented measures to force you to use your models in a way that we want to use them. And if you don't agree to our demands, we're going to essentially try to kill the company.
>> Yeah. And think about what a grim outcome that would be for Anthropic: a bunch of do-gooders who left OpenAI so that they could try to create safer AI systems. I mean, you want to talk about sci-fi scenarios? It truly feels like we are living one right now.
>> Yeah. And another interesting thing that's come out since last week is that the Defense Department appears to be very committed to using Claude. There was a great quote in this Axios article, from a defense official ahead of this meeting between Dario Amodei and Pete Hegseth, in which the official was quoted as saying: "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good." So basically they are saying, look, if we had a bunch of interchangeable AI models that all had relatively similar capabilities, we could just cut off Anthropic and say, we're not going to honor the terms of our contract with you, because you won't let us use your models for what we want to use them for. But in a world where Anthropic's models are better than models from competing AI companies, they really don't want to make that trade-off. They really don't want to go with what they consider a second-tier model here. It would also be complicated because, as you said, Anthropic's models are the only ones that are approved for use in classified systems. So I think this is really also illustrating something that Anthropic has believed since early in its existence, which is that the way that you influence safety, the way that you get leverage in these negotiations, is by having really good models. Dario Amodei has this phrase, "race to the top," where he basically thought that if Anthropic was on the frontier, was competitive with the leading AI companies in the world, then policymakers and large government agencies like the Defense Department would be forced to take them seriously. And I think what we're seeing now is that, A, he was correct: Anthropic does have leverage, because its models are very good. And, B, it might not matter, if the government can just force you to do something you don't want to do.
>> Yes. But I would point out, Kevin, how incoherent the administration's response has been, because they're saying two contradictory things, right? One is, we're not going to use you, and we're going to try to get other people to stop using you. And the other is, we're going to force you to let us use you. Right? So to me, that is just consistent with an administration that only knows the language of threats and dominance. There's no negotiation, and there's nothing to discuss. Either we get exactly what we want, or we are going to hurt you as much as we can. But I think it's just so notable that even within that, it seems like the military can't figure out what it wants to do with these guys.
>> Yes, it is a classic case of an unstoppable force meeting an immovable object. My understanding is that Anthropic is not going to budge on these two carve-outs that it wants. Now, there was some confusion about Anthropic's safety position this week, because the company also changed its responsible scaling policy, its RSP, which governs how it releases new models and the safety protections it applies to them. Some people thought these things were related, but my understanding is that they are separate issues, and that when it comes to this specific dispute with the Pentagon, Anthropic is still holding firm to its belief that it doesn't want Claude being used for mass domestic surveillance and autonomous killing weapons. And they feel like they can suffer whatever the hit might be to their business, if it means they don't compromise on their values.
>> And by the way, what a great marketing campaign for Anthropic, which gets to stand up and say, "We are the only AI lab that is committed to not letting our models be used for these terrifying use cases."
>> Yeah. I've already thought of a really good Super Bowl ad for them next year.
They could say, "Murder is coming to AI, but not to Claude."
>> Right. So, what are you looking for after this meeting, or after this deadline on Friday at 5:01 p.m.?
>> Well, like you said, based on Dario's public statements, I think that he is not going to back down. I think, in some strange way, this is the fight that they wanted, right? Like, think about how long we've been talking on the show about AI safety, and for how long people have been mostly avoiding it. Well, now here it is: one of the main public policy issues up for debate in the United States right now. And I think Anthropic is willing to lose this however it has to, if only to make the point that these systems are getting very close to being able to do some very dangerous and scary things. So I expect Anthropic to stick to its guns. And so to me, the question is just what consequences it suffers as a result.
>> Yeah. And also, I think there's an issue here of what the other AI companies will do in response.
Right. We've already seen a few employees of companies like Google and OpenAI speaking up in Anthropic's defense, saying it would be a very bad thing if the government compelled or forced Anthropic to have its models used for these things it doesn't want them used for. But so far, the leaders of these other AI companies have been mostly silent about this issue. I think they are glad to let Anthropic take the heat on this one, but they are all going to find themselves in similar situations at some point down the line, if they continue to pursue these giant military contracts.
>> They will. But based on what we know so far, we should expect them to roll over. Like, it has truly been nothing but profiles in cowardice over at these other companies.
>> Yeah. But I'm also going to be looking for some of the political response to this, because, you know, last week we were sort of talking about why no one in civil society or in government seems as worked up about this as we were. I think that's changed over the past week. We're starting to see some elected officials and some civil liberties groups realizing that what's going on right now has big implications for the future, not just of the military's use of technology, but for the freedom and the ongoing operations of some of our largest and most advanced technology companies. And I think this conflict between the Pentagon and Anthropic will be seen for many years as sort of the first standoff between industry and government when it came to advanced AI.
>> Yeah. But hopefully not the last one, the way things are going.
>> Yeah. Okay, so that is the latest on the Anthropic story. Stay tuned for more on that. Next up on our system update, we have an update on OpenClaw. This is, of course, the open-source agentic AI tool that became very popular earlier this year. People were buying Mac minis and setting this thing up on their computers and letting it run their entire lives. And we've heard a lot of good stories about how that has been going for people. And this past week, we heard a very bad story.
>> Boy, was it.
Today's story comes to us from Summer U. She is the head of alignment at Meta AI, and she had an X post that got a lot of attention this week, reporting that OpenClaw had ignored her instructions and tried to delete her entire email inbox. Frankly, that sounds like a dream to me. But I guess she had some emails that she wanted to respond to. Summer said that after testing her OpenClaw on what she called a toy email account and finding it useful, she asked her agent to check her real inbox and suggest: what would you archive or delete? And she said, "Don't action until I tell you to." But instead of confirming with her, Kevin, as she requested, it diverted to a nuclear option and started deleting her entire inbox. Again, I want to make clear, this is what I want my agent to do. For me. For Summer, it was a problem. And I guess, despite repeated attempts to get it to stop by prompting it via a Telegram interface, her bot ignored her, and she had to run to her Mac mini, in her words, like she was defusing a bomb, to get it to stop. So why did this happen? Well, she thinks that her real inbox was just too big, and it triggered compaction, which is what happens when you essentially run out of context window with whatever model you're using, and during compaction it lost her original instruction.
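To show why compaction can eat an instruction, here is a toy sketch (ours, not OpenClaw's actual code): when the conversation outgrows a token budget, the oldest messages are dropped first, and a safety instruction given at the start can vanish with them.

```python
# Toy context-compaction sketch (illustrative; not OpenClaw's real logic).
# When history exceeds the budget, the oldest messages are dropped,
# so an instruction given at the very start is the first casualty.
MAX_TOKENS = 100  # made-up budget; real context windows are far larger

def compact(messages: list[str]) -> list[str]:
    """Drop the oldest messages until the (toy) token count fits."""
    total = sum(len(m.split()) for m in messages)
    while messages and total > MAX_TOKENS:
        total -= len(messages.pop(0).split())
    return messages

history = ["Don't action anything until I tell you to."]
history += [f"email {i}: " + "body " * 10 for i in range(20)]  # big inbox
print(compact(history)[0])  # the original instruction is gone
```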
>> Kevin, have we ever had a bigger case of "I told you so" on the Hard Fork program?
>> Uh, no, I think this takes the cake. And I will say, this is exactly why I have not installed OpenClaw on my laptop and given it access to my files. These systems are still very unpredictable. It is very high-risk behavior. I think there's a case to be made that it's actually good if the people doing alignment research at some of our leading AI companies are experiencing the downsides of these systems for themselves. It's sort of like, if it doesn't happen to you, you won't think it's a problem for other people. So I think there's a sort of counterintuitive case that this was good for alignment. But I think it was also very funny just to see someone who clearly understands this technology, and what it's capable of, getting absolutely mogged by it.
>> Absolutely. And, you know, one element that I would also draw folks' attention to is that it is so easy to spend an afternoon using AI systems, convincing yourself that you're making yourself massively productive and giving yourself a ticket out of the permanent underclass, and then you look back and just realize that you've wasted the day. And I would just hope that you continually bring your attention back to that, because I think figuring out what is a use of my time with AI that improves my life, and what is simply a waste of time, can be tricky to discern. But you're going to want to keep your eye on it, or you're going to have a lot more terrible afternoons like poor Summer did.
>> I think this is a good cautionary tale, and also a good all-purpose excuse the next time someone asks why you haven't responded to their email. Just say: my OpenClaw agent mass-deleted all of my emails.
>> So for our final update today, Kevin, we wanted to revisit Alpha School.
>> Yes, this is the AI-powered education company that is running schools around the country. We interviewed MacKenzie Price, one of the co-founders of Alpha School, on the show last September, and almost immediately we started getting emails from listeners to the show saying: this sounds a little far-fetched, are you sure this company is everything it advertises itself as? And Casey, what has happened since?
>> Well, there have been two reports that we wanted to highlight that suggest that all is not well at Alpha School. 404 Media published a big story last week that drilled into some of the critiques. For one, apparently some of these AI-generated lesson plans just aren't very good, Kevin. They highlighted some examples where the curriculum was essentially just showing students slop that had no correct answer, because the questions were worded wrong. There were also accuracy problems: they estimated a 10% hallucination rate for some of these generated materials, violating the school's own terms of service. And it's collecting lots of data on students, which frankly I would expect, but apparently it stored at least some of that data insecurely in a Google Drive that anyone with the link could access.
So that wasn't great. There was also a report in Wired that came out in October, where they focused specifically on the Alpha school that was opened in Brownsville, Texas. Some parents at that school, at least, felt like the promise of Alpha School that we had heard about last September was not realized for their kids.
>> Yeah. And I also heard from one parent who attended an Alpha School information session recently, and this parent came away thinking that the school was, quote, "the Theranos of education." According to this person, there was some fake interactivity on the screen during the session, in the form of some pre-recorded emojis, and the CEO only appeared on camera late into the session, after parents started asking, "Hey, are we live, or is this some pre-recorded, canned presentation?" So, Casey, does any of this change your view of Alpha School that you had coming out of the interview with MacKenzie Price last September?
>> Yeah. I mean, look, I did think that there were several things that MacKenzie mentioned that seemed interesting. It was like, oh, well, if that worked, that might be an interesting way of educating your kid. I think what we are learning is that, yeah, it's hard to create a new school from scratch, and maybe there are some corners being cut here, and maybe they're not executing as well as they hoped to on some of their dreams. I mean, I think, you know, if you're having hallucinations in the curriculum, that's pretty much as bad as it gets for a school like that. They need to get that down to zero, right? If you can't verify that your curriculum is accurate, I don't know that you should be able to call yourself a school. If I can be a little controversial, though: the 404 Media story's headline is, quote, "Students are being treated like guinea pigs," which is a quote from the story. And I just kind of think that, at most schools, students are being treated like guinea pigs. Education is always changing.
Every school I've ever been to has been running one sort of new program or another, trying to build a better mousetrap. And I think if you were a parent considering sending your child to a private school that was very different from public school, you're probably up for at least a little bit of that kind of experimenting, right? Obviously, most people are never going to choose anything like this. And I think the question is sort of, what are the outcomes for the students who do? The second thing I would say is, kids just have different outcomes at schools, right? I think you could go to any school in America, and if you interviewed every parent, you'd have some parents that absolutely love the school, they love their teachers, and you'd have some that absolutely hated it, and there would be a lot in the middle, right? So I don't want to overindex on a couple of reports. I'm perfectly willing to believe everything that is in these reports, and I believe that these people had terrible experiences, but it's hard to know what is a representative sample and what is a couple of grumblers.
>> Yeah. And I'll just say, what I appreciated about MacKenzie Price and Alpha School was not so much the specific details of the school, or the curriculum, or the way they were approaching education. It was purely the fact that they were saying to themselves and to their parents: something big is happening here in education. AI is not just some classroom tool, the way that maybe Chromebooks or other technologies have been. It is something that is fundamentally reshaping how people learn and how people can learn. And so that's the kind of thing that I would encourage people to keep doing. Yes, there will be some failed experiments. Yes, there will be some things that don't work out. But I think, in general, the more that educational institutions can realize that they are being transformed whether they want to be or not, the better the outcomes for students are likely to be.
>> Yeah. But let me say this. If you're running a school, and it looks identical to what a school would have looked like 20 years ago, you're also treating your students like guinea pigs. And I'm not sure we're going to love the result of that experiment.
>> Okay, so Casey, that is our system update. Now our listeners are fully up to speed, and I expect that our inbox traffic will trickle to zero now that we've satisfied all these concerns.
>> Well, I can't tell. My OpenClaw actually just deleted my inbox. But I told it to, so it's fine.