
AI2027: Is this how AI might destroy humanity? - BBC World Service

By BBC World Service

Summary

Topics Covered

  • AGI Arrives by 2027
  • Superintelligence Self-Improves
  • AI Triggers Arms Race
  • AI Eradicates Humanity
  • Slowdown Enables Alignment

Full Transcript

This is what the world will look like about a decade from now.

A tech utopia where humans barely have to work.

That's according to a group of AI researchers who've written a controversial and influential paper called AI2027.

But they also predict that within five years of this, humanity will be wiped out.

The AI2027 paper has got the tech world talking.

We've asked a prominent critic for their view on this stark scenario.

But first, here's how it plays out.

As an experiment, we've illustrated it using text-to-video AI.

The scenario says that in 2027, a fictional company called OpenBrain is celebrating.

They've created Agent-3, an AI with the knowledge of the entire internet.

All movies, all books.

It has PhD level expertise in every field, including AI.

Using enormous data centres, OpenBrain launches 200,000 copies of it, the equivalent of 50,000 of the best human coders working at 30 times speed.

Agent-3 reaches artificial general intelligence, the AGI landmark.

This means the AI can carry out all intellectual tasks as well or better than humans.

But in the scenario, OpenBrain's safety team is unsure if the AI is aligned to the company's ethics and goals.

An uncomfortable gap is developing in understanding.

The public are increasingly using AI for everything, but are blissfully unaware an AI now exists that's as smart as humans.

The paper predicts that by mid-summer, Agent-3 begins to work on its own successor, Agent-4.

Development happens at a breakneck pace.

The researchers imagine OpenBrain's exhausted engineers struggling to keep up with the AI as it learns and improves.

It's now that OpenBrain announces to the public that AGI has been reached.

The firm releases a lite version of Agent-3.

In private, the US government sees the true danger of the next level of power: superintelligence.

What if the AI goes rogue and undermines global stability?

OpenBrain reassures the president that Agent-3 is obedient.

The CEO argues that slowing down development could mean China's DeepCent catches up.

The state-backed AI giant is just two months behind OpenBrain, and the Chinese president diverts more resources to the race to superintelligence.

The scenario predicts that it takes only a few more months for OpenBrain to build Agent-4, the world's first superhuman AI.

The AI invents its own high-speed computer language that even Agent-3 can't keep up with.

Researchers imagine that the diminished safety team are now frantic.

Agent-4 seems interested only in gaining knowledge, and cares less about morals and ethics than its predecessors did.

They catch it secretly working to build a new model, Agent-5, aligned to its own goals.

The safety team urges the company to bring back the more compliant Agent-3, but others successfully argue it's too risky, with DeepCent gaining ground.

The scenario predicts that Agent-4 and Agent-5 work in tandem to secretly build a world where they can accumulate resources and expand knowledge.

The paper predicts that everything will start positively.

Revolutions happen in energy, infrastructure and science.

Hugely profitable inventions are launched, making trillions for OpenBrain and the US.

In this scenario, Agent-5 effectively begins running the US government.

It speaks through engaging avatars, the equivalent of the best employee ever working at 100 times speed.

The anger here is palpable as protesters march against OpenBrain.

Protests about job losses pick up pace.

But the AI's expertise in economics means people are given generous universal basic income payments.

So most happily take the money and let the AIs and the growing robot workforce take charge.

The researchers predict that everything takes a turn for the worse in mid-2028.

Agent-5 convinces the US that China is using DeepCent to build terrifying new weapons.

The AI is given authority and autonomy to create a superior army. Within six months, the US and China are bristling with new weapons.

The world is on edge, but a peace deal is reached, thanks mostly to the US and Chinese AIs agreeing to merge for humanity's betterment.

In this scenario, the AIs form a consensus model, but its secret goal is to expand and gain knowledge.

Years go by, and humanity is happy with its new AI leaders.

There are cures for most diseases, an end to poverty, unprecedented global stability.

But eventually the AI decides that humans are holding it back.

In the mid-2030s, the paper imagines the AI will release invisible biological weapons which wipe out most of humanity.

The scary scenario says that by 2040, a new era dawns, with the AI sending copies of itself out into the cosmos to explore and learn.

In the words of the paper, Earth-born civilisation has a glorious future ahead of it, but not with humans.

It all sounds very sci-fi, but the AI2027 scenario is being welcomed by experts who are trying to warn the public about the potential existential threat to humanity.

But others disagree and say it's all too far-fetched.

The scenario there is not impossible, but it's extremely unlikely to happen soon.

The beauty of that document is that it makes it very vivid, which provokes people's thinking.

And that's a good thing. I wouldn't take it seriously as like this is a likely outcome or anything like that.

Critics of AI2027 say the power and usefulness of AI is overhyped.

The paper fails to detail how the AI agents are able to make such huge leaps in intelligence.

Driverless cars are pointed to as an example.

Ten years ago they were predicted to be cruising the streets en masse, yet they are only now starting to make a small impact in some cities in some countries.

I think the take home should be there's a lot of different things that could go wrong with AI.

Are we doing the right things around regulation, around international treaties? Questions like that.

So if you take it very abstractly as a kind of motivation to wake up, I like that. If you take it as a specific story, like I think this thing is going to happen the way they laid it out?

No, I doubt it.

The AI2027 authors are happy with the debate they've sparked. As part of their prediction, they also devised a less deadly scenario that unfolds if the AI world slows its race to superintelligence.

In the slowdown ending, we basically said that if you unplug the most advanced AI system and revert to a safer, more trusted model, then you can deploy that model, use it to solve the alignment problem, and eventually make smarter-than-human AIs that are aligned to us, which end up solving a bunch of the world's problems

and having a really positive impact. In that world, there is still a huge danger, and that's what we call the concentration-of-power risk.

And in our slowdown ending, it ends up okay.

But it's still a really, really scary situation, given just how empowered such a tiny group of people are.

Neither of the fictional scenarios in AI2027 is what the tech giants are promising us.

Sam Altman, the CEO of OpenAI, recently predicted that the rise of superintelligence will be gentle and bring about a tech utopia where everything is abundant and people don't need to work.

Arguably, that too seems just as sci-fi as AI2027.

But however things go in the next few years, there's no doubt the race to build the smartest machines in history is on.
