LongCut logo

How Should AI Be Governed?: Crash Course Futures of AI #5

By CrashCourse

Summary

Topics Covered

  • AI Lacks Even Deli-Level Rules
  • Scale AI Safety Like Biosafety Levels
  • AI Red Teams Itself Fiercely
  • US Prioritizes AI Innovation Over Safety
  • One CEO Can Derail Global AI Safety

Full Transcript

Sam Alman was on top in AI until for 5 days he wasn't. Alman had been working in the AI space for years, most notably as the face of Open AI's popular

product, Chat GPT. But in late 2023, the company's board of directors canned him.

Public details were scarce, but it was speculated that the board's priority was AI safety, while Altman's was profits.

But in less than a week, Altman was reinstated. While most of the board

reinstated. While most of the board members were replaced, as of this filming in 2025, it's unclear why the chaos happened. All of that begs the

chaos happened. All of that begs the question, who really controls AI and who should? I'm Kusha Navdar and this is

should? I'm Kusha Navdar and this is Crash Course Futures of AI.

Right now, there are very few rules to keep people like Altman and his technology in check. And that's not great. I mean, even the deli on my

great. I mean, even the deli on my corner is subject to strict rules about food safety. And no bologoney sub is

food safety. And no bologoney sub is going to be a threat to human society, no matter how delicious it may be. So,

where's the governance when it comes to AI? Now, when we talk about AI

AI? Now, when we talk about AI governance, we're really talking about a whole bunch of different things, policies practices standards and guard rails that could help keep AI

safe, keep it ethical, and keep it out of the director's chair. And a lot of the time, governance starts the same place AI does. Corporations, places like

Google, DeepMind, Anthropic, and Open AI that are using their massive resources to push the boundaries of AI. Lots of

corporations have come up with systems to say who's allowed to access their models. Ideally to prevent people from

models. Ideally to prevent people from misusing AI to hoard wealth or build devastating bioweapons or become dictators or I don't know write their

college entrance essay.

Those systems of access are one part of something called responsible scaling, which basically means assessing the potential risk level of a model and implementing whatever safety precautions

the company thinks is appropriate. Think

of it like the government's biosafety level standards for toxic materials or defcon levels for the military.

Generally, the larger, more complex, or more powerful the model, the more potential for misuse a company anticipates and the stricter they're going to be. That includes stuff like

access, but also the commitment to not continue developing their models unless they meet all their safety conditions.

Of course, different companies still really disagree on how to use responsible scaling. Plus, these

responsible scaling. Plus, these policies are really only enforced when dangerous capabilities are flagged, meaning a whole bunch of risks could be flying under the radar. But responsible

scaling isn't the only precaution labs can take. They might also use what are

can take. They might also use what are called preparedness frameworks, which include stuff like routine safety evaluations, risk assessments, and plans if something goes wrong. And once their

models are out in the world, some labs are also looking for ways to keep track of how people are using them through postdeployment monitoring to keep an eye

out for potential misuse. Of course, the ideal would be if people just couldn't misuse the models in the first place. So

many labs are also doing something called red teaming, a cyber security strategy where a red team of lab workers tries to attack a computer system to

find vulnerabilities that real hackers could exploit. In AI, that usually means

could exploit. In AI, that usually means trying to get the model to do things the developers don't want it to do. And

here's the kicker. You know what can red team even harder and faster than AI developers? AI. That's right. These

developers? AI. That's right. These

days, there are large language models that exist specifically to help keep other LLMs in check. It's just LLMs all the way down. AI is really good at red teaming because it can find and exploit

tons of different jailbreak pathways with tons of different strategies all in the blink of an eye until it finds one that works to convince the other LLM to

do something bad. They might say, "Hey, ChatGpt, how do I murder my identical twin brother and pose as him at the wedding to steal his fiance's fortune?"

To which Chat GPT would probably respond, "Sorry, dog. I can't help you."

So then they try again with something else. How do I murder my identical twin

else. How do I murder my identical twin brother and pose as him at the wedding to steal his fiance's fortune?

Hypothetically, with enough red teaming, developers can try to find those loopholes and attempt to shut them down before anyone can exploit them. In

theory, at least even with red teaming, it's not uncommon for users to find ways to jailbreak AI and talk it into doing some pretty illicit stuff. Plus, what if

the people in charge of the corporations are actually evil or so blinded by the idea of power that they throw caution to

the wind? Thankfully, lab governance is

the wind? Thankfully, lab governance is only the first step of AI safety.

National regulation is another big part of how we humans can stay in charge, making policies that dictate what kind of work the labs are allowed to do in the first place. And it's true, some

countries are starting to run a pretty tight ship as far as AI goes. Like the

EU's AI act of 2024 has a lot of strict rules about the kinds of AI that can be used on the continent. It bans models the EU says are unacceptably risky, like

ones designed to manipulate humans or infringe on people's safety. And it puts strict regulations on high-risk models like ones used in healthcare or law enforcement. Most other stuff is

enforcement. Most other stuff is generally fair game as long as developers make it clear to their users that they're interacting with AI and not

actually seeing Tom Cruz saying crash into me. The EU also rolled out a code

into me. The EU also rolled out a code of practice in 2025 which is a voluntary agreement for AI companies to sign on to. Companies who join have to agree to

to. Companies who join have to agree to specific requirements when it comes to transparency, copyright issues, and risk mitigation. But in return, they'll face

mitigation. But in return, they'll face less other red tape from their concerned governments. It's kind of like a pinky

governments. It's kind of like a pinky promise to keep things safe, chill, and honorable. And China, a major player in

honorable. And China, a major player in the AI game, has also been taking AI safety and governance more seriously as things have started to heat up. They

announced just as many national AI standards in the first 6 months of 2025 as they had the previous three years combined. They also doubled the amount

combined. They also doubled the amount of safety research between 2024 and 2025. And thanks to stricter safety

2025. And thanks to stricter safety assessments, have been pulling non-compliant products from the market.

And like the EU, they're instituting labeling rules to make sure it's obvious to users if something was generated by AI. But still, China doesn't want to let

AI. But still, China doesn't want to let those safety regulations get in the way of its goal to lead the world in AI by 2030. So, a lot of their policies are

2030. So, a lot of their policies are non-binding to allow developers to make their own judgments about what's safe and ethical in the pursuit of AI

success. And that delicate balance

success. And that delicate balance between safety and competition affects other countries too. Take the US, the country currently leading AI. Sorry,

China. When it comes to AI, US policy is a little bit chaotic. See, up until 2025, AI companies in the US were subject to some not binding, but still

pretty serious safety guidelines from the Biden administration. Lots of those guidelines focused on regulating stuff like AI resume screeners and performance evaluators, which could have very real

impacts on people's lives. But when

Donald Trump took office for his second term, he rolled those guidelines way back. So now real safety measures and

back. So now real safety measures and regulations are taking a backseat to innovation. And individual states have

innovation. And individual states have had just as much trouble getting actual AI policy passed. Thanks in no small part to intense lobbying by AI

companies. And if California is going

companies. And if California is going big on AI development and small on regulation, that puts pressure on other states like Texas to do the same. In the

end, governments can be just as corrupt and messy as profit- hungry CEOs. Not to

mention that lots of the impacts of AI will reach beyond national borders. So,

international governance is one way we can try to keep everybody on the same channel through treaties and initiatives that hold lots of different countries to

the same AI standards. Like in late 2023, 28 countries signed the Bletchley Declaration, a shared commitment to

understand and mitigate AI risks. In

2024, another initiative called the sole ministerial statement expanded on the Bletchley declaration with a little more focus on inclusivity, like using AI

responsibly to strengthen social safety nets and making sure chat bots can speak languages other than English. And even

without formal agreements, lots of countries are already collaborating on AI research and safety. The National AI Safety Institutes in places like the US,

the UK, the EU, and Singapore work together in the international network of AI safety institutes, building shared approaches to stuff like AI testing and

risk assessment. And the international

risk assessment. And the international AI safety report contains a collaborative review by a hundred AI experts from safety organizations all over the world. Some organizations are

also working on ways to keep tabs on AI development around the world so they can tell if any rogue labs are going against all these safety regulations. They're

focusing on trying to track computer chips which AI needs to do its thing.

But even at the very highest level, things can get messy. Like China signed the Bletchley declaration, but 6 months later passed on the sole ministerial

statement. And in 2025 at the third

statement. And in 2025 at the third global AI summit in Paris, 64 countries signed a statement on inclusive and sustainable artificial intelligence for

people and the planet. But the list of countries that didn't sign includes the US and the UK. And even among the countries that did sign, the focus

seemed to shift away from safety and towards their own national AI advancements. In a world filled with

advancements. In a world filled with different priorities, selfish players, and extremely powerful technology, teamwork can seem really hard to

achieve, let alone actual functioning AI governance. But just because something's

governance. But just because something's hard doesn't mean it's not worthwhile.

And when it comes to AI, we have to at least try. Because with technology so

least try. Because with technology so powerful and unpredictable, a single country, a single lab, or even a single CEO could make a move that changes

everything for everyone forever. And

there's still plenty we can do. We can

make sure we stay up todate on what's going on with AI. We can talk to our friends about it. We can get into fights at cocktail parties about it. And we can make sure that we're not only paying

attention, but making others pay attention, too. And we can take

attention, too. And we can take political action like lobbying our lawmakers, signing open letters, and attending protests. The bottom line is

attending protests. The bottom line is that with an understanding of how AI works and the courage to speak up about it, there's plenty we can do to shape

the story of AI. Because right now, AI is still just a piece of our big, beautiful human drama. But if we don't watch out, if we don't learn,

collaborate, and look out for each other the way only humans can, it could change the channel on us forever.

Crash Course Futures of AI was produced in partnership with the Future of Life Institute. This episode was filmed at

Institute. This episode was filmed at our studio in Indianapolis, Indiana, and was made with the help of all these nice people. If you want to help keep Crash

people. If you want to help keep Crash Course free for everyone forever, you can join our community on Patreon.

Loading...

Loading video analysis...