
Anthropic’s CEO explains why he took on the Pentagon

By The Economist

Summary

Topics Covered

  • AI Models Unsuitable for Autonomous Weapons
  • AI Empowers Private Actors Beyond Governments
  • AI Lacks Human Oversight in Drone Armies

Full Transcript

On fully autonomous weapons.

The logic, as I understand it, is not so much about democratic values. It's more that you think the technology is not ready for your models to be used fully autonomously. Is that right? So you have a different view of the capabilities of the technology, perhaps, than the department does.

Yeah, I think there are actually two things. One is the issue you just mentioned. You can think of this as a supplier of aircraft who says: this aircraft is not safe, by default, to fly above a certain altitude or to turn at a certain speed. We didn't make these systems to be safe if you use them in that particular way. Right? They're not suitable for this use case. We haven't made or manufactured them to work well, to be safe, for this use.

Was there anything specific? I mean, your model was used in Venezuela; it's being used in Iran. Is there anything in these existing use cases that is more…

One thing I want to say is that we started with a substantially more limited contract that enabled a more limited number of use cases, and I think that contract is in force maybe even until today. This was all about negotiating a new contract. So we had a substantially more limited contract for quite a long time, and with that more limited contract, they didn't run into problems on the ground.

So none of this was driven by concerns we have right now about what the Department of War was doing, right? Iran was fine and Venezuela was fine. Again, I'm not expressing an opinion either way on military policy or government policy. I'm just saying, if you…

The use of your model.

Yeah. You can… if you provide a service, you don't get to decide where that service is used. There's this operational side of military decisions, of course. We don't have any say over that.

What we do have a say over is this question of very broad use cases, right?

It's not about any administration.

It's not about any policy.

It's like, how should this technology be used in general?

I actually think there's a lot of value here.

And, you know, I want to make sure the Department of War understands this as well.

This isn't about concerns we have about actions they've taken already. We had a much more limited contract, and we never had concerns about that contract. This is about going forward: how should we think about the uses and the governance?

I think, going forward, their perspective would be, and they would probably use the analogy of the fighter jet or something, to say: listen, we have to have suppliers we can trust, who provide us with tools we can use in combat, and we can't have some guy in Silicon Valley suddenly saying, no, you can't use it for this, that, or the other. Their view would be that they need full operational ability to use it, and that you are, somewhat capriciously in that view, putting all of these constraints on it.

Yeah.

So I would say we agree with them. We don't want to do that. We agree on 99%: not only on the contract we were operating on together, but on this much broader set of things that we haven't even done yet or are just starting to do. So I just want to say again, there's so much more agreement here than there is this…

I mean, listening to you, it does seem to me extraordinary that you end up in this situation where you're going to be taken out of all work with the Pentagon.

That said, the other thing, because I've been thinking about this quite a lot: the logical conclusion of total control by the Pentagon, with no ability for the supplier to do anything, is surely some sort of effective nationalisation of those AI companies that work with the Pentagon.

Do you think that's a risk?

I think over the coming years, we're going to have to have a discussion about the role of AI and the government.

AI is becoming something that has implications at the level of all of humanity: for national security in the sense of the contest between nations; for national security in the sense of whether individuals can use this technology for destruction; for national security in the sense of whether the models themselves are a risk; and for economics in terms of its impacts.

And it's not about one particular administration. It's not about one particular department. It's not about specific military operations. Anyone who thinks that's what this is about has misunderstood. This is bigger than that. This is about the next administration, too. This is about Congress when there are different Congresspeople. This is about a different geopolitical situation than the one we're in now. We should not make it about what's happening right now.

And the thing we're dealing with here is AI is very powerful.

So there's actually a dual dilemma here, where AI has the power to make private actors more powerful than they've ever been before. Some people have criticised me for that.

Or you think that, as a private actor, you're more powerful than the government?

Actually, I've been worried about that risk for the longest time. I'm worried that this technology in the hands of government, not just the US government but other governments, democratic governments, autocratic governments, has the ability to vest them with unprecedented power as well.

And so when we talked about fully autonomous weapons, I said there were two objections. The first one was reliability. The second one was what I would call oversight. This is the idea that, right now, you have an army of human soldiers, and there are norms about serving in the military. You're supposed to follow orders, but if something crazy enough happened, the soldiers would say, I'm not going to do that. Right? You have a bunch of norms about how soldiers serve, about what they see their duties to be. What if you have an army of 10 million drones instead of 10 million human soldiers? What are the norms of the AI-driven drones?

I think if we handled this wrongly, you could have a situation where a very small number of people, or one person, has their hand on the button and controls those 10 million drones. And we need to answer these questions, because we don't want to make companies more powerful than the government, but we also don't want to make government so powerful that it can't be stopped. We have both problems at once.

So you basically just said to me: this technology is too powerful to be in the hands of a few private companies, and it's too powerful to be in the hands of a government.

That is, unfortunately, the situation we're in.
