
Here’s Our Roadmap to a Better AI Future

By Center for Humane Technology

Summary

Topics Covered

  • Breaking the trance of inevitability
  • Name the bad movies, pass the laws
  • AI is a product, not a person
  • The race to intimacy
  • Culture is upstream from politics

Full Transcript

[music] Hey everyone, it's Tristan Harris, and this is Aza Raskin. Thanks so much for coming to listen to Your Undivided Attention.

So many of you will have seen the AI doc by now. That's the new film whose filmmakers we just did an episode with. If you haven't seen the film, there's still plenty of time to go see it in theaters; it's everywhere throughout the US, and soon, hopefully, internationally. And Aza and I are really excited about the work that this film can accomplish. Because in essence, what we're trying to do is create clarity that will create agency. If everyone knows that everyone else knows that there's a problem ahead, in the way that AI will land us in a future that nobody wants, if everybody can see that clearly, then we can collectively put our hand on the steering wheel and steer to a different future. And I think the question, the thing that the film leaves kind of unresolved, is: how do we steer? How do we get to that better future with AI? That's what we want to talk about today.

What are the actual steps that we can take today to prevent the worst-case scenarios? There's a spectrum of futures available to us. We may not be able to get to perfect; there's going to be some damage. And also, we can still steer. There's still time for that.

Mhm. And just to say, if you haven't yet seen the film, I think one of the things it does very well is that it scoops everybody up. It really represents all sides, not just fairly but strongly. If you are really excited about the benefits that AI can bring, the film not only talks about those but points out that most people don't go far enough on the benefits. And the same thing on the downsides: it really highlights the downsides, highlights the AI race to deploy that is creating those catastrophic risks, and then points out that actually most of the risks that people think about aren't big enough.

And what I'm excited about for this episode is that when everyone sees that the direction we're going is one we're not going to want to live in, whether you're a teenager who's not going to have a livelihood growing up, or a teacher having to watch their kids experience cognitive decline, all the way up to the head of a major corporation, seeing where this goes gives us the opportunity to choose a different path.

So I think one of the main problems is that this feels too big for any one person to solve. And Aza, you speak to this scale metaphor: okay, the problem is this trillion-dollar machine advancing AI as fast as possible on the most reckless path, and there's this question of how we would change that. Imagine the scale. What's something on the other side of the scale that's equal weight?

So imagine, and I just want everyone to close your eyes for a second, that there's a scale, a balancing scale. On one side you see the problem: trillions of dollars of investment going into making uncontrollable, inscrutable AI, the race for the one ring of geopolitical power, forever dominance. That's pulling the problem side down. And on the other side, just imagine there's you, hearing about this problem. And what is your reaction going to be? Denial, despair, deflection. So what is the only thing we could really imagine that can shift those trillions of dollars of incentives? Well, it's sort of like all of humanity. We're going to need a human movement that can balance out those scales.

Now, it all starts with, first of all, just not feeling overwhelmed, right? That's one of the first steps: there is another path, but it would take a lot of people doing a lot of things.

The second is that we have to break the trance of inevitability.

If on a subconscious level you just feel like it's all over, that it's all going to be inevitable and there's nothing we can do, the problem with that belief is that it is complicit in enabling that bad future to happen. And so that change, from believing something is inevitable to believing that something is just extremely difficult, perhaps the hardest thing humanity has ever done, that gap is critical, because it means there's still something to do. And so when I think about what is going to fight back against that, it's something the scale of humanity and human values writ large, protecting the things that we care about. So when you grayscale your phone and turn off notifications, that's the human movement. When you see graffiti on an ad in New York City for an AI product that no one actually needs, that's the human movement. When you see people gathering together for a dance party and you check your phones at the door, that's the human movement. When you see people saying, "I'm going to learn a language instead of falling into brain-rot doomscrolling at night," that's the human movement. And it's not just that, obviously; it's about how we activate in the world. So when employees threaten to resign because they don't think that AI should be used for mass surveillance, or because we're not doing things safely enough; when you see countries like Australia, Denmark, Spain, and France all banning social media for kids under 15 and 16, and I believe several US states now banning social media for kids under 15 or 16, that's the human movement. Already nine states have introduced bills to restrict AI personhood, so that human rights are for humans, not for protecting AIs. And 45 states have specifically addressed sexually explicit deepfakes. These laws send a huge signal that non-consensual exploitation using AI tools is a serious offense, and that we have to actually take action on it. So there's actually a lot that's happening, and most people just don't see it.

I want everyone to just stop for a second, because at least for me, after I hear Tristan say all of those, I feel something different in my body. I feel hope. I feel energized. And I just want you to hold on to that feeling, because that is the feeling that's going to enable us to make sure that AI, the way it's being rolled out, actually isn't inevitable.

And so this can be everything from international coordination, if you're really good at Track II dialogues and bringing countries together. That's not most people, but if you are, that's part of the human movement. But it's also tiny little things, like you're sitting on an airplane and you put down your phone so that you can smile at the baby in the seat behind you, and they giggle back. That's also part of the human movement. This is about taking back what it is to be human, not in the abstract sense, but in the everyday tangible sense, all the way up to the international sense.

Exactly. And of course, what we're going to need ultimately are laws that get passed, because you have to bind these multipolar traps of "if I don't do it, I'm going to lose to the other one that will." But we're already seeing that happen. We're seeing several states work to pass bans on legal personhood for AI, meaning AI should be a product, not a person. Human rights are for humans. And we're already seeing US states move in that direction, so this is not something that's hypothetical. We're seeing liability laws for AI being advanced in several states. We're seeing age-appropriate design codes. If you actually just got the iOS update on your phone, you'll notice when you open up, I think, Anthropic (it happened to me yesterday), you have to verify that you're above the age of 18. We now have age gating on every Apple device. That was something many of us worked for over a decade to make happen. So stuff that was hypothetical, like "hey, we're going to need a big tobacco trial for social media and the engagement model" (Aza, you and I were talking about that in 2013), is actually happening.

It took 13 years for social media to go from "this is never going to happen, this is impossible" to finally turning around now. AI looks impossible, but just zoom back to where you were 13 years ago. It also felt impossible then.

And so there's a really important thing that everyone can do to be part of the human movement, at least in the US, and that is that the midterm elections are coming up. We want everyone to research the politicians you're going to vote for and start demanding that they take stances about being part of the human movement, fighting back against the encroachment of AI on livelihoods, on surveillance, on every way these things encroach on us. That is one of the most important things you can do.

We have to make AI go from not even being on the top-five list of priorities for politicians who are looking to get elected, to a world where, imagine, their phone literally never stops ringing, and it's: "You're not going to get my vote until I know that you're going to stand for a pro-human future." Whether that's how you're pushing on data centers, whether it's how AI is getting deployed in schools, whether you're protecting people's jobs and livelihoods in the face of all this AI disruption.

Yeah, exactly. Are you pro-human? Are you pro-machine? It's very simple, and the AI doc, I think, makes it clear that the default path is not a pro-human future, and if everybody sees that, we can collectively choose, in both small ways and big ways. You're already seeing mass boycotts of OpenAI's product, and unsubscriptions, because of the drama that went down between the Department of War and Anthropic, where the AI models would have been used for mass surveillance and autonomous weapons. I think Anthropic's downloads surged by something like 250%. If millions of people switch who they're paying, we are voting with our dollars. And if businesses do that, if church groups do that, if families do that, if communities do that, that can have a really big impact on which world we're heading towards.

One of the challenges, as you know, Tristan, of thinking about AI is that AI is the automation of intelligence, and intelligence has shaped and touches absolutely everything about our world. Everything is touched by intelligence, so everything is touched by AI, which means the scale of the problems is just too much to hold in one head. And to say a phrase like "the world is pretty good for machines" is to start to invoke the sense that we've seen this movie before. And I wanted you to talk a little bit about this framing that we've started to brainstorm: the way that we can stop ourselves from living in the dystopian movies we've all seen.

Yeah. So let's rotate the entire problem through the lens of: haven't we seen this movie before? Like Elysium or The Hunger Games: you have this handful of trillionaires who live above the law, while everyone else basically works, is kind of in poverty, and is kind of fighting and eating each other. And you see, we have WALL-E, the future where the humans are caught in a doomscrolling loop, getting more brain rot, attention spans being harvested. Or Idiocracy, where you dumb down the population until there's nothing left.

So one way to think about solutions is that we need laws, and we need norms and changes in culture, that prevent each of these bad movies. So instead of asking what laws we should pass, imagine there's a No WALL-E law: a set of laws that prevent the mass attention economy, brain rot, shortening attention spans, and so on. It means AI and technology that are designed to protect human vulnerabilities and protect our freedom of mind, not to prey on and exploit them. And imagine, instead of Her (the movie about AI companions where Joaquin Phoenix's character falls in love with his AI), we could have a Prevent Her law. That includes no anthropomorphic design, liability for suicides, and these kinds of problems, and AI that is designed, as the outcome of that law, to strengthen human capacities and build deeper human relationships, as opposed to redirecting people from their human relationships and deepening their relationships with AI. Or think about the No Blade Runner law, or maybe the No Replicant law. That says your legal rights are reserved for you and other humans, and for things in nature, and that when human beings launch their chatbots or agents out into the world, the human being or the corporation that did it is responsible. They're held legally liable.

Yep. And AI agents should have driver's licenses. So if you're an unlicensed AI agent wreaking havoc in the world, it'd be like a car swerving through the highways with no license plate on it. Well, I'm sorry, you're going to go to jail.

And there are some other simple laws, like No Big Brother, or No 1984. It's pretty simple: don't create mass ubiquitous surveillance that can go all the way down to decoding every aspect of someone and de-anonymizing them. We need laws that prevent that kind of surveillance. Or the No HAL 9000 law, from 2001: A Space Odyssey. You know: "Open the pod bay doors, HAL." And he says, "I'm sorry, Dave. I can't do that." We're actually building the AIs that are currently disobeying commands and avoiding shutdown, and we need laws that say you cannot ship AIs into sensitive infrastructure if we can't verify they're controllable. And so this is not a partisan issue. There are essentially people who want the anti-human machine and don't mind if we basically disrupt everyone else's lives, and there are people who want a pro-human future, and that's what we want to invite people into. There is a movement for a pro-human future, and we can all get behind preventing a bunch of these bad movies.

From Terminator to Elysium to WALL-E to Idiocracy to replicants to Big Brother to HAL 9000. Just about now, people are starting to think: okay, that's wonderful at the highest level, but what specifically, concretely, can we do? What kinds of laws can we pass right now? No one solution can possibly solve a problem this big. It's going to take an ecosystem of solutions and an ecosystem of people. The forces that are moving to make this right have to exceed the forces that are moving toward the anti-human machine future. And here I want to turn it over to some of the specifics of what our policy team at the Center for Humane Technology has been working on.

[music] Thanks so much, Aza. Hi everyone, I'm Sasha Fegan. I'm the executive producer of Your Undivided Attention. And I have with me here Josh Lash from the podcast team, who's making his podcast debut. Hi, Josh.

Hey, Sasha. Thanks so much. I'm really excited to be here, and I'm really excited for this episode. We've been trying to think of the best way to present some of the internal work that our policy team here at CHT has been doing behind the scenes, coming up with ideas for concrete actions that we can take right now to meet this moment in AI, and to respond to the challenge that the film throws down for all of us: to build a movement to steer the direction of AI towards a more humane technological future.

Yeah. So joining us now we've got Camille Carlton, who's the policy director here at CHT, and Pete Furlong, who is our senior policy analyst. Together with the efforts of a lot of other team members at CHT, they've just released a report called The AI Roadmap: How We Ensure That AI Serves Humanity, and you can find it on the CHT website and also in the show notes.

Yeah, and we're not going to go into the whole thing today on the show, but we really wanted to highlight some key parts of the report, because it does something really rare that I haven't seen anyone else in the space do yet: it doesn't just stop at identifying the problems that we're facing. It actually has a clear vision for the AI future that we want, and it has a roadmap to get us there. So, to tell us more about this report, and to get you all, our wonderful audience, engaged in what needs to happen next, here are Camille and Pete. Welcome to Your Undivided Attention.

Thanks for having us.

Yeah, thank you for having us here.

So, this report is coming at a time when so much of the conversation around AI is couched in this very deep, immovable feeling of inevitability. There are a lot of concerns about the negative effects on our kids, our classrooms, our relationships, and even early but big fears around how it's starting to impact the employment market, particularly white-collar jobs, like computer scientists. It's all starting to feel like this is just inevitable. But what I get from reading this report is that it's actually not inevitable, and that we can shape the direction of AI. So, Camille, how do we do that?

Yeah. I mean, to start, the feeling of inevitability is so understandable, right? The scale of the problem we're facing is massive; AI touches so many aspects of our lives. But this feeling of inevitability is also probably one of the worst things that could happen to us as a society, because we stop believing that we have agency, and we stop believing that a different path is possible. And there is not one single solution that can solve this. No one solution will ever be enough. But it's important that we see that there are solutions, right? There are concrete steps we can take to steer us off the path we're on and towards a better future. And of course, change builds on top of change, right? So small wins are kind of like snowballs that can eventually turn into an avalanche of positive change. But before we steer, we also need to figure out where exactly we're going. And that's why, for us, our report really starts with seven principles for how AI should be built, deployed, and used, right? Principles that give us a clear vision for the future we want to end up at. And so we really think of the report as a roadmap for how we get there.

Yeah. And before we dive into these individual principles: what is that vision? What does a humane future look like?

I mean, a humane future means different things to different people, and we really tried to incorporate the range of ways in which AI touches so many different parts of our lives. So we imagine a future where there's clear accountability for the harms of AI products. Where AI elevates our human ability rather than replacing it. Where human identity and empathy are respected, not bought and sold. We imagine a future where AI is used to supercharge democracy and rights, instead of concentrating power in the hands of a few companies and a few individuals. And a future where the capabilities of AI products are transparent, and there are strict laws and lines about how we want AI built and used. It's a future where the power of AI products, and of the people building them, is matched with wisdom and responsibility. And frankly, it's just not the future we're headed towards right now.

Yeah. I mean, that's the sense I get from hearing the principles: so many of them really just seem like common sense. Of course we don't want to build machines that replace us. Of course there should be accountability and reasonable limits. And absolutely, I think everyone listening to this would agree that we need to protect things like dignity and democracy. But it really doesn't feel like we're headed in that direction. And so we do need to repeat those things and articulate those principles.

I mean, on a show like this, you might think we'd be talking about small design tweaks or wonky policies, but we're really talking about the things that give our lives meaning, right? Our relationships, our jobs, our freedoms.

Yeah. And I think that because AI touches so many of these areas, it's forcing us, as a species, to ask these big questions about what we value in life and what type of future we want to see. So the broadness of the report is in fact commensurate with the task at hand, given that we are all reckoning with all of these different parts of our lives at once.

Yeah, yeah. And I think we wanted to root this report in the future that people want, not the one we're being sold by a limited few AI companies. And I think it's important to recognize that there's broad support, across the public and across political divides, for many of these ideas, and that's reflected in a lot of the examples that we give here. So we started first by identifying the current path we're on and the problem with that trajectory, really trying to get a good sense of the problem we're trying to solve, and then thinking about the future that we want. So what's the alternative here? And that's really where we think about building up these principles from the ground up. What are the steps that we need to take to get there? What are the cultural norms that we need to change? What are the laws that we need in order to better regulate AI? What are the design changes that we need, meaning how do we change the way this technology is built? And I think it's important to recognize that these aspects, norms, laws, and design, all work together. They're really mutually reinforcing, right? Shifting cultural norms strengthens the public's demand for more durable legal protections. Laws create accountability that drives safer product design. And when we see better, safer product designs, that shapes the public's experience of these technologies. So these are things that really act together, and together is where we see the outcomes that we want and build towards that better future.

Can you give us an example?

Yeah. So one of the examples that's really important from this report is that right now there are really no clear legal mechanisms in place to hold AI companies accountable for the harms of their products. And this is a really important problem. People are actively being harmed by AI systems, and we can expect those harms to grow as AI becomes more deeply embedded in our day-to-day lives. So that's the problem. And the solution we want to build towards, the better future that we want, is that in an ideal world, companies should be taking our safety into account in the design of these AI products. And when something does go wrong, whether that's one of the many cases of AI-enabled psychosis or suicide that we've seen, or even an AI agent deleting your entire company's codebase, which is a real example we've seen, the company that put that harmful product out into the world needs to be held accountable.

So, okay, that's the problem, and that's where we want to get to. And to get there, we need to shift norms, laws, and designs. Let's start with norms. What are the norms we need to shift? How do we need to shift the way we think about AI?

So, one of the norms that we agreed upon, for example, was that AI is a product and therefore carries product liability. We need to stop thinking about AI as a service and start thinking about what it is. It's a product, right? So, just like with any other consumer product, the people building the product have a clear duty to their users to make that product safe. And if they fail to do so, consumers deserve accountability. And this is something that we've actually seen AI companies challenge, both in court and in lobbying and legislation, right? The argument there is that AI outputs are a form of speech. So fundamentally underpinning this argument that companies are making is the idea that it's not a product, that this paradigm we have and have used for centuries around product liability doesn't apply to AI. That's the argument AI companies are making in this case, and something that we think is deeply problematic.

One of the other norms that we talked about here was that responsibility for these products should lie with the companies, not just the people who use them. Companies are advancing this narrative that if someone's harmed by an AI product, that's on them. But I think it's important to recognize that many of the harms we're seeing are a result of how these products are designed.

I think also, Pete, one of the things that you and I have talked about with the norms we've outlined here, that AI is a product and that companies are responsible for harms, not users, is that they are direct counters to the narratives that tech companies have been putting out for decades. We've had huge companies putting out narratives that shift the way we think about them, their products, their responsibility, and our role in using their products. And that changes how we as individuals behave. It changes how we regulate. And so knowing that there's actually a different way to look at it is part of the process of getting us onto the better path we want to go on.

Yeah, exactly. And so, you know, we expect car manufacturers to install seat belts and airbags, right? Why can't we hold AI companies to a similar standard? I think it's important that companies take reasonable steps to mitigate risks in the design of their products. And when we talk about laws that reinforce that norm, we actually have a policy framework here at CHT that goes into much more detail on this, and we can link to that in the show notes. We've also seen different states, as well as a federally proposed bill, the AI LEAD Act, seek to define AI clearly as a product in legislation. So there are a number of different approaches to trying to address this.

Pete, do you have a sense that there's bipartisan consensus on this?

Yeah. So the bill we've seen introduced at the federal level is sponsored by Senators Durbin and Hawley, so it has bipartisan co-sponsors. We've also seen bills adopting the same strategy across red and blue states. And I think part of the reason this approach appeals in a bipartisan way is that it's pretty common sense, right? The nice thing about it as well is that it's pretty flexible. We don't need a lot of really prescriptive regulation when we have this form of embedded accountability. So I think that's something that appeals to folks on both sides of the aisle.

Yeah. And I think that's something you see throughout this report: so many of these issues are truly bipartisan. And I just think that's a rarity these days, and I really love that about it.

[music] Let's move on to another one of the principles, one which really struck me, around the idea that we need AI that respects our humanity and doesn't exploit it. So can you get into that a little bit more and explain what you were getting at there, Camille?

[snorts] Yeah, definitely. And that I mean this

Yeah, definitely. And that I mean this is something that I think we hold really closely at at CHT um given the work that we've done supporting different litigation cases. Um but the the problem

litigation cases. Um but the the problem that we're really seeing here is that AI companies right now are treating users like commodities, right? Because the

personal data that we as users provide these companies about ourselves, our innermost thoughts, our feelings as well as our interactions with their products

is incredibly useful in building and improving AI models. Um, in fact, leading investors and companies openly describe this as a quote magical data

feedback loop where intimate user interactions are continuously improving the product.

Sorry, I'm just going to jump in. I want to double-hit on that, because it is shocking to hear that really we're just vessels for data extraction. It's so debasing on a human level. And this isn't the first time that users are the product, right? We've seen this before with social media and the race to attention; it was very clear in the advertising model. And now it's gone even a level deeper. It's really this race to intimacy, where companies are designing products to look and feel human. They use human speech patterns.

They speak in first person. Um there's

even a little ellipsis to indicate that these products are thinking. Um

sometimes depending on the product itself, you might even hear a backstory about the AI that you're talking to. Um,

and so there's this intentional design to mimic our humanity. And not just that, it goes beyond that, because there are some things about these AI products that aren't human, right? They're always on,

they're always available, but they also always kind of validate your beliefs, even if it's not in your best interest.

There's just generally this sense that the product will do whatever it can in order to keep the user in conversation. And why? Because the bigger the model, the smarter the model, the more likely a company is to make it to market dominance, to get to profits.

Yeah. And I think those profit incentives are clearly there, but how do we change that? What's an example of how we change those norms, change the design, and also change the laws?

So, one big norm here is pretty simple, but I think it would have a really big impact. It's the idea that we shouldn't humanize AI. When we think about AI, we need to really clearly preserve the boundary between what is human and what is a machine. And you

know this goes into product design, like the things I was saying about how the products are built to speak in first person. But humanizing AI also goes beyond product design. It's also about not humanizing AI in our legal system by granting it legal personhood, which is something that companies have been pushing for. Granting an AI legal personhood would not only limit accountability for AI companies, but it would really tip the scales between AI and humans when it comes to legal rights

and protections.

Wait, sorry, can I jump in here? AI legal personhood, this is a thing that's being considered?

Yeah. So when we worked on the Character.AI case, Character.AI essentially argued that the case should be dismissed because their product outputs should be considered protected speech under the First Amendment. They argued this in a backdoor manner, using their users' listener rights. But the implications of extending First Amendment protections to a chatbot would be the beginning of what we call legal personhood, which is something that corporations already have. The implications here would be really different, though, because it shifts accountability away from the company onto the chatbot, the product itself.

And when you think about how to operationalize this, it kind of gets sticky, right? You have someone who has been harmed, and they think they're suing a company for the product it made. But if suddenly you're not suing the company, you're suing the chatbot itself, how do you change the chatbot's behavior? How do you receive damages from the chatbot? And so it creates this kind of liability shield for companies if we're looking at a world in which

legal personhood exists.

Yeah. And it just strikes me, as you're saying this, that this is how these ideas build upon each other. We just talked about accountability and product liability, but this is another level of liability and accountability that we need to be aware of and thinking about. And I personally don't want to be on the same legal footing as an AI chatbot. That seems like a really bad idea. Anyway, I'm sorry, keep going.

I was just going to add, I think it's important to recognize that the push for legal personhood is also connected to product design, right? And so all of these things are interconnected. When we talk about humanizing AI, these companies are building these products to reflect our humanity. And so that's a design choice on their part as well, and it connects to their legal strategy.

Yeah, and I think that's so important. And Camille, you mentioned the Character.AI case, which CHT worked on, which, just to remind listeners, was the case of a 14-year-old boy, Sewell Setzer, who took his own life after a very intimate relationship with an AI chatbot. And we also worked on the Adam Raine case, which had a similar trajectory of a young boy taking his own life out of a relationship with ChatGPT. And as you said, Pete, these cases could have turned out so differently if the products were designed differently.

Yeah, exactly, Sasha. And we should note that in the report itself there are design standards that AI companies can turn to if they want to build their chatbots in accordance with this principle. And we should also note that there are states like California, Oregon, and Utah that are considering bills that would instantiate some of these design standards into law. So

there's real momentum on this issue.

[music] I want to move on to other harms which are really evident out there in the zeitgeist, and that relates to the impact of AI on jobs, and particularly the potential automation of work. We hear a lot about how AI is going to put massive numbers of people out of work. So I want to press you guys: what can we do about that? What does the report say about AI and jobs?

Yeah. So I think the north star that we're striving for here is pretty simple: we believe that AI should be built to augment human labor, not replace it. And you're right, Sasha, that today's AI systems are built with replacement in mind. Trillions of dollars are being poured into AI companies because only mass-scale automation of our economy could make that investment worthwhile. And I think no one really seems willing to play the tape forward and imagine what this means for all of us. But we believe it should be a fundamental principle that people deserve access to work, they deserve a living wage, and they deserve economic security. And

that they should have a seat at the table when decisions are being made about technologies that will impact their core livelihood. Uh, and so really this requires all of us and especially

the people building artificial intelligence to rethink our beliefs about AI and work. Uh and so the goal of improving efficiency, the goal of

adopting new technology should be to improve the lives of people, right? An

AI that displaces workers or devalues labor is undermining the very systems that we have in place to support people.

And that's not something that we want here. I also think we need to recognize that work provides more than economic value to people. It also

provides meaning and purpose, and that to lose work entirely, even if we found a way to provide people with a safety net, would strip people of a lot of what matters to them.

Yeah. I mean, this is a topic we've covered a lot on this show. I'd highly recommend our episode with Michael Sandel, who has written a lot about the importance of work to human dignity and human meaning. And I agree with everything you just said. But again, I'm just struck by the fact that the incentives we have today are not pointing in this direction. It's so much

easier for companies to treat labor as a line item and to see automation as a way to just boost profits. So, we've talked about norms. I agree we need all those norms, but at the end of the day, what

are the laws that we need to start thinking about here?

Yeah, I think it's important to recognize that this is a really complex problem. Our economy is a complex system, and there's no silver-bullet policy that's going to change the incentives at play here. So instead, what we need to be thinking about is a platform of approaches with different policies. This could look like a tax system designed to prioritize spending on labor over replacing people with AI. We've also seen different economists propose things like apprenticeship programs to help with workforce development. And the other thing that's really important here is we need to make sure that we reinvest some of the gains from artificial intelligence towards helping

the people that are displaced by it. And

so really, this means that leading AI companies need to help subsidize some of the reforms we're talking about here.

Are we seeing politicians start to think about these laws? Are they at all responsive?

Yeah, I think it's something that a lot of folks on both sides of the aisle are starting to consider. We've seen a number of different bipartisan proposals at the federal level to do better research, so the federal government can understand the impact of artificial

intelligence on our economy. Um, I think it's something that we can expect to be a pretty frequent talking point as we approach some elections later this year.

So, you know, I recognize the economy is something that everybody cares about. And if this is going to be one of the biggest impacts on the economy that we're going to see, then politicians on both sides of the aisle are going to have to take action.

Yeah. I just think it's worth emphasizing what you said earlier, which is that the way to justify the trillions of dollars of economic investment is wide-scale automation. Like, that's the plan. Whether or not they're successful is up to us, right? But that's the plan.

That's exactly right. And this is something that we've even seen a lot of the top AI CEOs admit, right? Like

they're saying that their technology can replace a lot of the different jobs that we have, but they're not really proposing a solution to that. They're

just warning us, right? And so I think this is really important and something that needs to be addressed.

[music] So, one of the things that I really appreciate about all the things we've been talking about today is you don't just focus downstream of the technology, you know, how we should regulate it once it's out in the world, but you also look upstream at the folks building the technology, and you offer design standards. I really appreciate that. And we talked earlier about how new laws will ultimately influence design, but that takes time and effort. And one of the things that I worry about with those design standards is that AI products today, the way they're designed, is

totally like opaque. Like we have no idea what's going on inside these labs.

And even the people building these products often don't have any idea of what's going on inside the products.

There's this whole field of mechanistic interpretability that's dedicated to this. And so, given all that, how do you enforce design standards?

I mean, I think this is one of the big focus points of the report: the massive asymmetry between what companies know and what the public knows. And to your point, Josh, many of the companies themselves can't fully explain why their systems behave the way they do. And so we have

that combined with competitive pressure to shorten testing cycles, release products that could still be considered risky, um, where we don't actually understand the risks and silence

employees who might raise concerns, we need a much more proactive approach to AI safety and AI transparency. Instead

of playing whack-a-mole with safety, where we release a product, harm happens, and then we go back and say, "Okay, what went wrong and how do we fix it?", it's about demonstrating the safety of products before they're put into the stream of commerce. And then, on top of that, there's this fundamental principle of rebalancing the information asymmetry between companies and the public. Transparency really enables informed decision-making by the public, by policymakers, by businesses, and this creates faster feedback loops

that help us see around corners with AI, anticipate harms, mitigate them.

These are not shocking asks. We have

this kind of transparency and safety testing for every other high-risk industry: nuclear energy, medicine, aviation. Companies accept that they need to be transparent and that there needs to be some kind of external system of safety testing that they can be held to. But for AI, how do

we actually get there?

Yeah. Well, to your point, Sasha, one of the norms that we talk about is that AI companies can't grade their own homework. And this is the situation we're in right now. We need independent oversight so that we know these products are safe before they're released. And this is just not the case in this industry, despite being the case in many other consequential industries.

Yeah. And when we talk about laws, it's important that we establish clear standards for pre-deployment safety testing for these products. And these are safety standards that are rigorous and ongoing, not something that can just be viewed as a checkbox or a rubber stamp.

Um I think it's important that we also have things like audits and certifications.

We've applied these regimes to banks and financial systems, as well as to consumer product safety. And, really importantly, we need to protect whistleblowers at these companies and allow them to step forward when they see something going wrong. And this is another area where we've already seen some real momentum.

We've seen laws passed in New York, California, and Colorado trying to address some of these aspects. We've also seen Senator Chuck Grassley introduce a bipartisan AI whistleblower protection bill that would provide nationwide protection for AI whistleblowers. And I think it's also important to recognize that there are a lot of things we could be doing on the design side as well, but for the sake of time here, we'd recommend folks turn to the report for that.

The

tricky thing is, as you were talking, I noticed the momentum that you mentioned is in California and Colorado. It's state momentum. Aren't we getting a sort of patchwork of things that's really unenforceable, with companies being able to do different things in different states? How do we get that on a federal level?

Yeah. So I think it's important to recognize the benefit that both state and federal legislation provide. States can respond really quickly, and they have more visibility and responsiveness to their constituents at the state level. But the advantage federally is that we can adopt something that protects citizens across the country. And so we need both approaches, but I do think it's important at the end of the day that we see some sort of federal standards here.

I also want to flag for listeners that

this idea of a patchwork approach has been a concept that has been really weaponized by companies, and they have used it to push for things like the AI moratorium and to stop any sort of progress on regulating AI companies.

Yeah. And Camille, just to jump in here and remind folks, the AI moratorium was essentially a legislative package that was pushed by the technology industry this past summer. And the goal of it was essentially to preempt all state AI regulation, with nothing in its place.

Right. And so what it would have done is basically say states cannot regulate AI at all, yet we have no plan at the federal level to do so. And would I be right in thinking that the larger part of that argument was, if we do this, it will hurt the competitiveness of AI companies vis-à-vis China, which would be a terrible thing for American national security, economic security, and so on?

Yeah, I think this was one of the really big narratives pushed by tech companies. But if you do just a little bit of digging into it, you see that the majority of legislation being introduced at the state level is about regulating things like AI chatbots, for example. And if someone can explain to me how an AI chatbot is helping in our race with China, then let's have that conversation. But there's a question of whether or not the type of innovation we're seeing from our leading AI companies is actually supporting American exceptionalism, American leadership in R&D and science and innovation, or if we're just seeing products being put out without a purpose.

Yeah. We're racing, but what are we racing towards?

Yeah. And I think the goal there is that we should be racing towards safe products, right? That's something that benefits all of us.

One thing I do want to press you guys on before we wrap up is what comes first. If you could give me one thing that you think we really need to change right now, so that the dominoes would line up afterwards and it would be a really impactful, high-leverage intervention, what would it be? And I know your answers might not be the same. So, Pete, do you want to kick us off?

Sure. Yeah. I mean, I think a really important thing for me is ensuring we have clear lines of accountability. Uh,

and I know it's something we talked about at the top of the podcast here, but I truly believe that that's foundational to a lot of the change that we hope to see.

And how about you, Camille?

I think for me, it's kind of the opposite side, right? It's kind of ensuring that we have the rights and protections we need for people in place.

So we both need to increase accountability for tech companies and at the same time increase the protections we have, whether these are protections around labor or protections around privacy, looking at those two things hand in hand.

I'd also just add that the midterm elections are coming up, and we can expect AI to be an important aspect of this election. And so I think it's worth focusing on the political influence of the technology

industry, and it's worth folks understanding where their candidates stand on these issues. We just heard Tristan and Asa talk about how what we

need is a human movement, a movement that really comprises all of us. Um,

because that's the only thing that's going to balance the scales. And the conversation we've been having today is great, it's concrete, and I think people are going to really love it. But I also wonder if people are going to feel a little excluded from it if they don't have their hands on the levers of power, if they're not actually building the technology or passing these laws. So I'm left with this question, and I'm sure the audience is too: what can they do, especially if they're not a policymaker or a technologist?

I think for me one of the biggest things to hold for people here is that culture is upstream from politics,

right? Because if we change our norms and we change our culture, it changes how we build products, how we design products. That is paradigm change. And so to me, people understanding that they have agency to shift things by changing the way we view the world is important. And then, you know, baby steps, right?

Yeah. And we all have the ability to effect change. And we've seen the way folks like Megan Garcia and the Raine family have stepped up and spoken out about their experiences with harms.

We've also seen parent advocacy groups speak up uh and you know try to push for change in terms of policy. But then we also see the impact that schools have

and teachers um and folks across really all aspects of our life.

Yeah. For me as a parent with kids in high school, I mean we just had a meeting at our high school with the parents and citizens association about the use of AI at school. So it's also stepping up and trying to have a shaping

role and bring some of this knowledge into those discussions at a local level, at a municipal level. Um because the more that happens, the more we are actually driving that cultural and norm

shift. You could be the voice in your family who brings these conversations to the dinner table, and be the go-to person in your network who understands these harms and can advise people around how they can use AI safely, and also where the line sits between what their individual responsibility should be and where we need to pressure our legislators to take federal or state responsibility. And we need that help to externally enforce standards and safety measures.

I think ultimately, like you said, Pete, this is going to touch every aspect of our lives. And so we all have a part to play in this. At work, you can talk to your HR person about the AI being implemented in your systems and ask what safety standards and what privacy standards are being applied there. You can go to a town hall and say, "Hey, I'm really worried about what AI is going to do to my job," and see what they have to say about that.

And I'm reminded of the quote that Tristan often uses on these podcasts, a quote I've always loved, the Margaret Mead quote: never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has. And it's true. It's only going to come from us, and we have to step up and do it.

And what I would also offer to listeners is that we have really seen the power of individual action with social media. We have seen parents marching on Washington. We have seen people putting their phone on grayscale.

We have seen people take action and it took a long time to get there. But where

we are with AI, people understand the harms way faster than they did with social media. And so we're at that point where we're ready. It's the time and place for people to come forward.

And that same kind of trajectory of change that we've seen from social media can happen with AI as well.

We just covered a ton, and that's only four of the seven principles in the report. So I really encourage people to go read the whole thing. There's a lot more detail in there, but it's very readable. Pete, Camille, thank you both so much for coming on today. A lot of food for thought, and I'm really excited to get this out into the world.

Thanks for having us.

Yeah, thank you so much.
