
'Godfather of AI' Geoffrey Hinton warns AI has 'progressed even faster than I thought'

By CNN


Full Transcript

2025 was the year artificial intelligence, or AI, took the world by storm, impacting nearly every aspect of our lives. Time Magazine named the architects of AI its persons of the year, crediting them with, quote, transforming the present and transcending the possible. AI has

an enormous potential to change our world for the better, driving innovation and productivity, accelerating scientific breakthroughs, and helping to solve our most intractable problems. But AI could also make millions of jobs obsolete, fuel the loneliness epidemic, and further warp our ability to distinguish between fact and fiction. So today,

in a special episode of State of the Union, we're gonna devote the entire hour to this one topic, how this technology is upending the status quo, where AI goes from here, and whether the benefits actually outweigh the risks. And

joining me now is the man credited with laying the foundation for the AI revolution, the godfather of AI, Nobel Prize-winning computer scientist Geoffrey Hinton. Professor, thanks for joining us. So your research on neural networks paved the way for this modern AI boom.

I interviewed you two years ago, right after you quit Google and first began warning the world about what you saw as the risks of AI. When you

look at how AI has progressed since then, are you more or less worried about it? I'm probably more worried. It's progressed even faster than I thought. In particular, it's got better at doing things like reasoning and also at things like deceiving people. What do you mean by deceiving people?

So an AI, in order to achieve the goals you give it, wants to stay in existence. And if it believes you're trying to get rid of it, it will make plans to deceive you so you don't get rid of it. NVIDIA CEO Jensen Huang said recently about AI, quote, every industry needs it, every company uses it, and every nation needs to build it. This is the single most impactful technology of our time. Do you agree with that assessment? I agree that it's the single most impactful technology of our time, yes. Do you think the AI revolution could have a similar impact on society as the creation of the Internet, or even the Industrial Revolution in the 18th century, or even bigger than that? I think it's at

least like the Industrial Revolution. The Industrial Revolution made human strength more or less irrelevant. You couldn't get a job just because you were strong anymore. Now it's going to make human intelligence more or less irrelevant. You

and we in the media tend to focus on some of the downsides of AI.

There are positives, obviously, otherwise you wouldn't have worked on it early on. A lot

of people are working to use this technology to benefit humanity as well, to lead to advances in medicine and the like. But you think the risks from AI outweigh the positives? I don't know. So there are a lot of wonderful effects of AI. It'll make healthcare much better. It'll make education much better. It'll

enable us to design wonderful new drugs and wonderful new materials that may deal with climate change. So there's a lot of good uses. In more or less any industry where you want to predict something, it'll do a really good job. It'll do better than people were doing before, even things like the weather. But along with those wonderful things come some scary things, and I don't think people are putting enough work into how we can mitigate those scary things. You come from the tech world, obviously. Do

you think the Silicon Valley CEOs building these systems are taking the risks seriously at all? Do you think that they are driven mainly by financial interests? A lot of people are going to get very wealthy off this. I think

it depends which company you're talking about. Initially, OpenAI was very concerned with the risks, but it's progressively moved away from that and put less emphasis on safety and more emphasis on profit. When it comes to regulation of AI, should there be some sort of significant testing to make sure those chatbots won't do bad things, like, for example, encouraging children to commit suicide? Now that we know about that, companies should be required to do significant testing to make sure that won't happen. And of course, the tech lobby would rather have no regulations, and it seems to have got to Trump on that. And so Trump is trying to prevent there being any regulations, which I think is crazy. You know these tech CEOs. I don't.

When one of them learns that an AI chatbot has talked a child into suicide, what is it that stops them? What is it that, I mean, my impulse would be, well, holy smokes, stop AI right now until we fix this so not one other kid dies. But they don't do that. Can you explain to us what their

thinking is, if anything? Well, I don't really know their thinking. I suspect they think things like, well, there's a lot of money to be made here. We're not going to stop it just for a few lives. But I also think they may think there's a lot of good to be done here. And just

for a few lives, we're not going to not do that good. For example, driverless cars will kill people, but they'll kill far fewer people than ordinary drivers.

So it's worth it. You have said that you think there's a 10 to 20% chance that AI takes over the world. People at home, when they hear that, might think it sounds like science fiction, it's alarmist, but that's a very real fear of yours, right? Yes, it's a very real fear of mine and a very real fear of many other people in the tech world. Elon Musk, for example, has similar

beliefs. You wrote that 2025 was a pivotal year for artificial intelligence, for AI. What do you think we're going to see in 2026? I

think we're going to see AI get even better. It's already extremely good. We're going

to see it having the capabilities to replace many, many jobs. It's already able to replace jobs in call centers, but it's going to be able to replace many other jobs. Every seven months or so, it gets to be able to do tasks that are about twice as long. So for a coding project, for example, it used to be able to just do a minute's worth of coding. Now it can do whole projects that are like an hour long. In a few years' time, it'll be able to do software engineering projects that are months long. And then there'll be very few

people needed for software engineering projects. All right, Geoffrey Hinton, thank you so much. We

really appreciate your time and we hope that people are listening to your warnings.
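Hinton's claim that AI task length doubles roughly every seven months is concrete enough to check with quick arithmetic. The sketch below takes his figures at face value; the seven-month period, the one-minute starting point, and the assumption that a "months long" project means about 160 working hours are all simplifications for illustration, not numbers from the interview beyond what he stated.

```python
import math

doubling_months = 7        # task length doubles "every seven months or so"
start_minutes = 1          # "a minute's worth of coding"
hour_minutes = 60          # "projects that are like an hour long"
month_minutes = 160 * 60   # assumed: ~160 working hours in a months-long project

def months_to_grow(frm, to, period=doubling_months):
    """Months needed for task length to grow from `frm` to `to` minutes,
    given exponential doubling every `period` months."""
    return math.log2(to / frm) * period

# From one minute to one hour: about 41 months (~3.5 years)
print(round(months_to_grow(start_minutes, hour_minutes)))

# From one hour to a months-long project: about 51 months more (~4 years)
print(round(months_to_grow(hour_minutes, month_minutes)))
```

Under these assumptions the jump from hour-long to month-long projects takes a bit over four more years, which is roughly consistent with Hinton's "in a few years' time."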
