
AI Is Not Improving Productivity: Nobel Laureate Daron Acemoglu

By MIT Sloan Management Review

Summary

Topics Covered

  • Technology Lacks Pre-Ordained Destiny
  • Choose AI Automation or New Tasks
  • AI Excels as Pro-Human Information Tool
  • Productivity Stagnates Despite Innovation
  • Individuals Shape AI's Pro-Worker Path

Full Transcript

Hi listeners, we're running a short survey to learn more about our audience so that we can continue to bring you a podcast you find helpful. If you have a moment, please take the survey at mitsmr.com/mpodcastsurvey. You'll receive a complimentary download of MIT SMR's executive guide, How to Manage the Value of Generative AI. Please take the survey this month at mitsmr.com/mpodcastsurvey. We'll put that link in the show notes. And thank you for your help.

Hi everyone, we're back with a bonus episode profiling another thought leader in the technology research space. MIT Institute Professor Daron Acemoglu is a Nobel Prize-winning economist and the author of Power and Progress. He joined Sam today for a conversation spanning technological advancements, limitations, and regulation. We're back on March 10th with more new episodes. For now, we hope you enjoy this conversation.

I am Daron Acemoglu, Institute Professor at MIT, and you are listening to Me, Myself, and AI.

>> Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review, exploring the future of artificial intelligence. I'm Sam Ransbotham, professor of analytics at Boston College. I've been researching data, analytics, and AI at MIT SMR since 2014 with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. On each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.

Hi listeners, thanks again to everyone for joining us. I'm excited to be talking with Daron Acemoglu, professor of economics at MIT. Daron works extensively on economic development, labor economics, and the economics of technology. In 2024, he was awarded the Nobel Prize in economics for this work. His insights on the interplay between institutions, technological change, and inequality are particularly relevant for today's businesses. Of course, our listeners will be most interested in Daron's thoughts on AI. Daron, great to have you on the podcast.

>> It's my pleasure. Thanks, Sam.

>> Okay, so your work spans institutions, technology, and inequality. Can you share some of the general themes from your past research?

>> I got into economics because I was fascinated by what I saw around me in my very young teen years: very divergent economic, political, and social outcomes across countries, huge disparities in terms of wealth, in terms of poverty. And those interests have framed my research and my focus on institutional factors, which determine the effects of history, the effects of how society is organized (the rules, the laws, the norms), and on technology as the prime channel via which human ingenuity and human decisions impact economic productivity and economic well-being.

And throughout, I have been fascinated by the interplay between institutions and technology, and by how institutional factors and technological factors have evolved over time. So a lot of my research has focused on, for example, why there has been a huge divergence in the economic fortunes of different parts of the world since the 16th century or thereabouts. It is very much related to, for example, the fact that European powers colonized the rest of the world and shaped the institutional trajectories of very different nations around the world in very diverse ways.

And I've also been fascinated by the industrial revolution and how we started this process of using knowledge, science, and various skills to improve the way that we produce goods and services.

>> That's all really salient for what's going on right now. You have a recent book, Power and Progress, and I think I was reading the preface of a revised edition, perhaps, where you noted that things sort of changed on you underfoot. How have the recent changes changed some of your thinking?

>> Well, I think two things are worth noting there. The main thesis of Power and Progress is that technology does, to some extent, what we want it to do. It does not have a pre-ordained destiny that will take us in one direction or another. We have a lot of agency, a lot of choice in shaping the future of technology. And different futures correspond to different winners and losers, different benefits, different costs, different productivities.

We try to make that point by going into history, showing how critical periods during our recent history, like the last 1,000 years, have led to sometimes big technological breakthroughs but with huge losers. And sometimes those forces have been reversed, and gains from technological betterment have been shared more equitably.

So that message, I think, is more relevant today than ever. AI is a particularly versatile technology. It provides so many different futures for us. And the narrative that there is a determined, natural future of AI, that we are all going there whether we want it or not, and that ultimately we're all going to become incredibly more prosperous as a result, is just simplistic. Fighting against that narrative, I think, is very important today, because that narrative lulls us into a sense of helplessness and a sense of complacency that could be quite costly. On the other hand, of course, in 2021 and 2022, when we were writing, it was impossible to foresee how rapid some of the advances in generative AI would be.

But those advances haven't really changed the basic trade-offs and the basic messages that we wanted to convey in the book.

>> You talked at a high level about different directions of AI. What are they?

>> I think, simplifying it, you have a couple of poles that are pulling in different directions. I would single out, in the production process, automation, which is the dream of most AI models today, especially under the banner of artificial general intelligence (AGI), which aims for large language models or other generative AI tools to reach levels of capability comparable to the best workers across a very wide range of domains.

The reason why that is viewed as attractive is that, just like previous rounds of software that improved cognition in different domains, it can then be used for automating tasks. So AGI is very tightly interwoven with the automation agenda.

Automation is great. It gets rid of some routine tasks. It can get rid of some boring tasks. When it's applied in the physical domain, such as with cranes or robots, it could remove the most dangerous tasks from the human work schedule. But automation also doesn't benefit workers by itself. It takes away tasks from workers. It is beneficial to capital and capital owners, and not so much for workers in general.

So at the other pole, we have things that are complementary to humans, meaning that technology enables humans to do more things, or better things, or completely new things. These new things are what I refer to as new tasks. So if you look at people around you, many of the occupations you'll see involve things that could not even be imagined 50 or 60 years ago.

As a journalist, you're going to be making video casts and podcasts and using technologies for research that require completely different skills than somebody 60 years ago going to the library and sifting through books. So those are some aspects of new tasks. So are many of the physical occupations in manufacturing that involve much more technical work. Those have generally been very good for productivity and for worker wages and employment. So that's one dimension in which the future of technology could have very different effects, depending on whether we go in the automation or the new task direction.

I would also like to add that whether we use technology for information centralization or decentralization is also important, in that many of the early hopes about computers were centered on decentralization. People could do in their garages things that IBM, as a centralized organization, couldn't do. And personal computers enabled that to some extent, though nowhere near comparable to the hopes of the pioneers of computing in the '60s and '70s.

But today we are going in the opposite direction. Large language models are information centralization tools. They collect all of the information. They aim to collect all of the information of humanity, ultimately, and then centralize that and process it in a centralized manner that then gives you answers, so there's less for the decentralized human mind and human participation to do. Centralization and automation are two different poles, but they are complementary. So when I'm talking about new tasks, it is really about enabling the technology to go in a direction that can really help workers, help individuals, not just big corporations. So it's going back to those aspirations that were already present in the late 1960s and 1970s. And my work shows how new tasks, when they have been activated, have led to productivity gains, and have led to wage gains and employment gains.

>> So if we think about these new tasks, though, what kinds of things should businesses be looking for? If people buy this argument and then they want to go down this path, what do they need to do?

>> Actually, my thing is disarmingly simple.

AI is really an information technology, a very powerful information technology. It's not an automation technology. AI is not thinking anywhere like the human brain. Instead, it has some truly impressive capabilities that the human brain doesn't have, and it lacks some of the judgment- and creativity-related capabilities that the human brain naturally has. As an information technology, what AI is very good at is sifting through gargantuan data sets and finding relevant context and information for some specific task or specific context or specific application.

So if you're an electrician and you encounter a piece of equipment that is behaving in a way that you haven't seen before, or a completely new piece of equipment that you don't have experience with, the right AI tool could immediately and reliably give you information about why that sort of unexpected behavior is occurring, or what are the things you need to know about this equipment and how it interacts with the particular type of electricity grid or the environment that it is situated in. Those are the kinds of things that regular electricians would have to work decades to get the experience of, in an imperfect way. So we can significantly improve what electricians, what nurses, what educators, what journalists, what academics could do using AI in order to perform more sophisticated tasks or new tasks and acquire much better information.

And I think that while generative AI, together with the right sort of scaffolding from good old-fashioned AI that does pattern recognition, could provide that kind of ideal tool for human new tasks, that's not the direction in which AI is being developed. In fact, none of the big companies are pouring even a small fraction of their investment into developing AI as a pro-human, pro-worker tool.

>> Well, let's connect these last two points a little bit. These are, as you say, being developed by big companies. When I think about the electrician in your scenario, wouldn't they naturally get recommended solutions that come from, let's say, advertising models that are built into the large language model?

>> Right. Right now, today, as an electrician, you can take GPT with you and you can ask questions, but there are several problems with that. First of all, it has not been designed or optimized for that task. Second, it's not reliable, so a much higher degree of reliability is necessary. Third, it has not been trained on the domain-specific information: all of the relevant electrical equipment, and the deep understanding of electrical laws and electronics, that would be necessary. And most importantly, it has also not been trained on the use cases of the best electricians dealing with similar problems, from which AI could learn. So it is not designed for that task, and it hasn't been trained with high-quality, domain-specific data. All of those restrict your ability to use ChatGPT or similar tools, and that's the reason why, whenever employers are given a push toward using them, the first thing they want to do is just use them for automation, because that just seems to be the path of least resistance.

>> To think about that a little bit more: there's nothing that says that we couldn't train those models over those domain-specific knowledge bases. Maybe we're just in the early days, and that could come out. And I think that's plausible, but I'm not sure of the economic incentives for people to do that.

>> The economic incentives are not there, because this is not the business model of the leading corporations. That data doesn't exist, and it won't exist unless we have property rights in data and we have proper data markets.

The current architecture of large language models may create hard limits on reliability, whereas in situations like this, reliability could be a very important constraint. So, for example, imagine we do this with nurses, and one in a thousand times they give you the complete opposite of what they should do, and you poison the patient. I think one in a thousand seems very small, but actually, in medical applications, that would be an unacceptably large casualty rate. So it's really a different architecture, and a different sort of preparation and training of these models, that may be necessary.

>> Yeah, I think your error rate point is an interesting one, because I'm not really sure what I think about that. Half the nursing students graduate in the bottom half of their class. That's just how averages work.

>> But as a result, we don't allow nurses to make those decisions at the moment. So, except in a few cases where you have highly trained licensed nurse practitioners, nurses cannot prescribe drugs. They cannot make emergency decisions. When a patient is having problems, they have to wait for a physician to come. So that's the margin that we're talking about. And nurse-complementary technology would expand what nurses do in those domains. And no, you couldn't do that unless all of the nurses become even better trained than licensed nurse practitioners, or the AI models get much better.

>> Let's push on the nursing example a little bit more. My daughter has recently learned how to drive. I mean, I'll make you nervous: I think we both live in the same area. So, no, she's a good driver, though. But she hasn't seen millions of almost-wrecks yet, and I would love for her to have that experience. You know, by analogy, the nurses may not have seen these esoteric cases in the way that we were just talking about. These AI models are fabulous at storing lots and lots of information and recalling it.

>> I think there are many things that can be done. The future of technology is rich. If you integrate AI with virtual reality, you can have personalized experiences where, you know, your daughter could experience very dangerous situations sitting in front of a computer. And I can tell you from my own experience: when you get behind a wheel, you think you know, and you don't.

>> We talked a little bit about incentives. Let's talk about measurement a little bit. I think the issue we've always had is that we can measure the number of widgets, but we have a lot of trouble measuring the outputs of our knowledge economy. How important is measurement, and is there anything that we can do to try to improve that?

>> Well, I think measurement is very important, and there are some puzzles that we should bear in mind, and I think these puzzles do feed into my concerns and also my skepticism about some of the claims. We definitely do live in an age of innovation according to many measures. If you look at the number of patents at the USPTO, they have quadrupled over the last 40 years. We get an incredible array of new apps every day on our phones. We have much faster turnover of electronics, in quite a significant way. I mean, when I use my iPhone that's a couple of years old, everybody says, "Wow, you're really missing out." And you know, when people were using rotary phones, dial phones, you could use the same model for 30 years and nobody would bat an eye.

So there is a sense in which we are getting a lot of innovations. But using the standard measures of economists, we don't see much improvement in productivity. In fact, we're having slower productivity improvements today than we did in the '50s, '60s, and '70s, those boring pre-digital days. What's up with that? Well, the people from Silicon Valley, and economists who are sympathetic to that perspective, would say that's all a measurement problem: you're just not making allowance for how high-quality some of the products you're getting now are, and the Bureau of Labor Statistics is overestimating inflation.

You have in the middle of your palm a supercomputer, a super powerful machine that allows you to access information like never possible before. So all of these things, they think, are the reasons why you shouldn't look at macroeconomic data, why we should ignore all of the economists' core data sources.

There's some truth to that, but I think it can be exaggerated. You know, we did not measure the benefits from antibiotics that well either, but you still got amazing improvements in many directions: in terms of GDP, in terms of the output of the pharmaceutical sector, and in lives saved. Life expectancy increased tremendously with antibiotics. Well, life expectancy is not increasing now. We're not seeing any of the AI-facilitated pharmaceuticals do anything yet. So perhaps time will change that. But we just don't have objective measures that show huge gains from AI as of now.

>> I don't think that's just a measurement problem, but measurement can help us understand where the bottlenecks are and also, perhaps, improve certain assessments of what the impact of AI is in different sectors. But I think a lot of it, again, comes down to what I was talking about: if you overdo automation, if you overdo information centralization, you're not actually going to get all that promised productivity boom.

>> So if, you know, I'm bought in, what do we need to do here? I mean, as an individual, what does an individual need to do, given that I can shake my tiny fist against the FAANG companies? What should individuals be doing here?

>> I think a lot. At the end of the day, society consists of individuals. If a lot of individuals change their minds, that has an effect. Part of the reason why tech companies have so much power is because they have what Simon Johnson and I called, in Power and Progress, persuasion power. They have persuaded the rest of society that their intentions are benign, their technology is good, and they will not misuse it too much.

There's a lot of counter-evidence to that, but we still sort of believe it. We still believe the leading AI companies when they say we have this amazing god-like technology: believe that it's god-like, and that it will be used just in your service, your own personal god. So, you know, absolute power corrupts absolutely. I don't know that we should really believe those claims. So different individuals will have to reach their own conclusions. But enough individuals, a critical mass of them, changing their views would have an effect through the democratic process.

And you know who the individuals with a lot of say are: the hundreds of thousands of people, perhaps more, who work as engineers and scientists in these corporations. They determine the direction of research. If they decided next year that they want to work not on automation and AGI but on developing more pro-worker, pro-human technologies that will help workers and human decision makers, and on decentralization, that's what we would get. So that's an individual decision.

Another individual decision belongs to entrepreneurs. You know, a lot of new ideas come from startups. Right now, startups are aligned with the big companies, because their dream is to be bought up by the big companies. That's the way you become a billionaire right now. Well, again, that's a choice. Different values, different priorities, different regulatory systems: perhaps we should really be much more vigilant about mergers and acquisitions. That could lead to very different dynamics.

>> Good. I hope our students are listening, because I do think that most of our students are out there trying to come up with startups with the goal of being an acquisition by one of these large companies.

>> And you know, if I wanted to be rich, that's what I would do too.

>> Mhm. Yeah. So maybe our measurement problem extends both to our productivity and to our incentives, if that's how we're measuring success.

>> Yeah.

>> You mentioned regulation. Let's touch on that for a minute. I tend to think, well, we can have market forces perhaps do a better job of aligning incentives than regulation. What can regulation do here, particularly when we are dealing with goods that are not physical goods?

>> Well, I would like to say three things about regulation. First of all, regulation is always tricky. Look at Europe. Europe is so far behind in AI and many areas of tech because they've not been very conducive to innovation via their regulatory system. Too many organizations, too much interference: that can be very bad. So you have to balance things.

Second, some regulation on health-critical, information-critical, and democracy-critical things is absolutely necessary. You cannot let AI models pretend to be doctors without having some sort of assessment that they are actually giving adequate information. We apply tremendous barriers to anybody becoming a quack doctor, and we should apply similar standards to AI models.

But most importantly, we may need a change in the philosophy of regulation. Regulation should not be a reactive thing where, oh, we try to stop whatever AI companies are trying to do. I think we need proactive regulation that helps the AI industry move in a more socially beneficial direction. And that starts by recognizing what this socially beneficial direction is. I've argued it's pro-worker: new tasks, more decentralization. It then recognizes why the current playing field is tilted against that direction and tries, in a soft way, without stopping or killing the market process, to correct those distortions and give the alternative directions a fighting chance.

>> That's a moment of hope there. That's good. Let me switch a little bit. Our show is Me, Myself, and AI. Let's let people get to know you a little bit. How did you get interested in these things?

>> I've always been interested in technology as the engine of the industrial revolution, of the rapid growth process, and that brought me, together with my studies of labor markets, to focus on automation. So I've been working on automation for over 20 years, and then when AI models started making rapid advances in the mid-2010s, I got worried about what that would imply for this aspect of the future of work, what it would imply for wages and employment. And that made me invest more time and resources into AI: understanding AI, understanding its societal implications, but also understanding the technology. And I think it's fascinating. It's super promising, but also super scary.

>> I think that's a nice way to wrap up that balance that you keep coming back to. One of the things we like to do on the show is ask you a bunch of rapid-fire questions, top of your mind. What did you want to be when you grew up? When you were a kid, what did you want to be when you first were thinking about a career?

>> I wanted to become a social scientist.

>> Oh, so okay, that worked out for you. Then what's the biggest misconception that people have about artificial intelligence?

>> That it will somehow completely replace humans. I think, in the end, AI will be something that works alongside humans, and the better we understand that, and how to achieve it, the better we will be at shaping the future of work and the future of humanity.

>> How do you personally use these tools?

>> I use it just like other people. I sometimes ask questions of ChatGPT, and, you know, most of the time I am both surprised by how good it is and disappointed that, you know, if I really trusted everything I got from it, I wouldn't be doing so well.

>> Yeah, I have to push back a little bit there. I find that if I know something about a subject and I ask a question, I'm disappointed in the results.

>> Exactly.

>> And if I don't know much about the subject, then I'm impressed with the results. That's it. That ought to worry me, because...

>> Even when I know about the subject, I am impressed by, you know, how well it is able to synthesize the basic knowledge there. But it always pretends to know more, and gives answers that are really incorrect, because it's extrapolating too much.

>> What has moved faster than you expected with artificial intelligence?

>> Oh, the large language models. I mean, their capabilities, their reasoning capabilities, are truly impressive.

>> All right, it's been great talking to you. This has been a fascinating conversation. I love your balance of both optimism and concern, and I think that's a nice way to wrap up this session. Thanks for taking the time to talk with us.

>> Thank you, Sam. This was a lot of fun.

>> Thanks for listening. Me, Myself, and AI season 13 premieres on March 10th. Please join us.

Thanks for listening to Me, Myself, and AI. Our show is able to continue in large part due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.

