
The Hidden Cost of OpenAI’s Pentagon Deal? Trust.

By Hard Fork

Topics Covered

  • OpenAI Pentagon Deal Triggers Backlash
  • Elite AI Talent Holds Leverage
  • Anthropic Revenue Explodes Amid Fight
  • AI Nationalization Inevitably Looms
  • Prediction Markets Fuel War Profiteering

Full Transcript

Well, Casey, we are now in week two of this incredible high-stakes drama that's been playing out between the Pentagon and America's leading AI companies.

There's been a lot going on. We now have more clarity on why the deal between Anthropic and the Pentagon fell apart.

Uh, we also know how this Anthropic supply chain risk designation is actually going into effect and impacting the way that government agencies are responding. And we have been learning this week about how OpenAI's deal with the Pentagon is shaping up. So, lots to discuss here, but first we should make our disclosures. I work at The New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.

>> And my fiance works at Anthropic.

>> Okay, let's start with OpenAI, because they are sort of the late arrival into this story, but in some ways the most dramatic. Since Sam Altman announced last Friday that OpenAI had arrived at an agreement with the Pentagon, we have learned a little bit more about that agreement. Uh, as a reminder, according to Sam Altman, this agreement did include some prohibitions on domestic mass surveillance and autonomous weapon systems, basically the same two red lines that Anthropic had set out that were causing them so much trouble with the Pentagon. And I think it's fair to say this provoked one of the biggest backlashes in that company's history.

>> It really did. We've seen it across social media. Many of the top upvoted posts on OpenAI-related subreddits have been condemning this move. OpenAI has been scrambling to try to rebuild trust. But at the end of the day, Kevin, I think both the Pentagon and OpenAI are saying to the public, you're just going to have to trust us. And the public is saying, well, we don't.

>> Right. So, there have been a lot of people cancelling their ChatGPT subscriptions and switching over to Claude as a result of all of this, people who don't agree with the Trump administration or the stance that the Pentagon has taken here. And presumably because they're seeing some pain in the cancellations department, as well as just a general feeling that this narrative is not going well for them, Sam Altman has been doing some damage control. So on Saturday, he hopped on X to talk about this and answer questions about the Pentagon deal. He was joined by two other employees, and these questions were sort of the kinds of things you'd expect. You know, people asking: what did you guys agree to that Anthropic didn't? Where are your red lines? Who's going to be making the kinds of hard decisions during something like a war about how these models can and can't be used? What about this domestic mass surveillance thing? So I think he answered these questions, but really the thing that they did was also to release the language of this contract that had been in dispute, that had been the subject of so much speculation.

>> Well, they released what they called the relevant portion of the contract, but then we would see later commentary from experts in government procurement that said, essentially, look, until we see the entire contract it's just very difficult for us to take at face value the idea that this is the only relevant language here.

>> Right. So they did not release the whole contract, but they did release some relevant language from this contract with the Pentagon in a blog post. Then on Monday, Sam admitted that he made a mistake. He said, "We shouldn't have rushed to get this out on Friday." He also added that it looked opportunistic and sloppy, to coin a phrase.

>> Yes, it was slopportunistic.

>> And it includes the language, quote, "The department understands this limitation to prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." I found this all slightly confusing. Casey, do you understand what OpenAI has said and the various evolutions of its position on this?

>> Well, I think the key takeaway here is that they are saying that they have put in some amended language that will prohibit certain uses of their systems by the government. So, for example, they're going to prevent the government from using commercial data that they sort of acquire legally and running that through GPT models for domestic surveillance. I just want to say, though, that there is always a high risk here for what I would call Jedi mind tricks from the government, Kevin, because we have seen Democratic and Republican presidents do this, right? Of sort of going to the absolute limit of what the law will allow when it comes to surveillance of Americans. And a way that they'll get around that is by saying, "Well, we're not doing surveillance, Kevin. We're doing some intelligence gathering." Right? And so as annoying as it is to fixate on the semantics here, I'm telling you that whether or not you personally are surveilled will come down to semantics, right? And so that's why we're digging in the way we are.

>> So, still lots of questions about some of the details here. And I think there's a lot of doubt and concern among some employees of OpenAI about whether this actually did end up in a place that they're comfortable with.

>> Boy, is there. Some of that employee discontent spilled over onto X, where you had some employees saying essentially that they didn't trust their leadership either. An OpenAI employee named Leo Gao called the contract language window dressing and pointed out that it still seems to give the Pentagon control over when to deploy autonomous weapons, and that it just doesn't do much to address some of the other loopholes. And then, maybe more dramatically, Kevin, on Tuesday, Max Schwarzer, who was the post-training lead for OpenAI, a vice president of research at the company, announced that he was leaving. And in his X post, while he was pretty vague, he suggested that this was an important time and that he had come to really respect Anthropic's values, and so he said he's going over to work there.

>> Yeah. So, what's your take on how the damage control is going for OpenAI? Do you think they have warded off the most heated criticism, or are people still really mad?

>> I do not think that they have stemmed the tide. They put a lot of effort into changing the narrative here. When I saw that they were doing that X AMA, that they had put up a blog post, that they were quoting at least some of the contract language, I thought, these guys are really going for it. That also told me that they were really scared. But here's the thing to remember, Kevin. Most Americans just don't like AI very much. They didn't in the first place. They didn't like it for all the normal reasons of, well, my social media feed is filling up with slop and my manager's telling me I have to use it every day or I'm going to get fired. When you add into that mix that it's potentially also going to be used by your own government to spy against you or maybe kill you with a murderbot, of course Americans are going to say, "Well, this freaking sucks, right?" So, I think this was kind of the strategic miscalculation that Sam Altman made: at least according to him, he thought he was going to get into this dispute and sort of be able to de-escalate it and come in as the white knight and save the AI industry from the overreach of the US government. And what he found out instead is they're kind of holding the bag of all of the discontent that the Pentagon whipped up with this forced policy change.

>> Yeah. It's really interesting to me, because I think my assumption had been that we were sort of over the era of worker empowerment in Silicon Valley, right? Like years ago, sort of pre-Covid, we had all these Google walkouts and all these employee protests over military contracts. And I think a lot of CEOs and leaders at these companies sort of said, we're not doing that again. Like, we're not going to give our employees veto power over the deals that we make or the contracts we sign. And it suggests to me, with what is going on at OpenAI right now, that at least for them, in their specific case, where you do have this staff of elite technical talent that is not easily replaceable, because there aren't that many people who know how to build and train these models, you actually do need to keep them happy. And so those people, maybe only those people, have significant leverage still.

>> Yeah. Let me make a sort of sweeping generalization, right? Like, I think there are sort of two major camps at OpenAI. One is the camp that has sort of been there for, let's say, three-plus years, the real experts that you just mentioned, who have this kind of critical knowledge for how to build next-generation frontier systems that almost nobody else in the world has. And those people tend to just really care a lot about how the technology is used. These are people who joined OpenAI in part because it was a nonprofit, right? And there is a solid core of those folks who are still working there. And then there's a group at OpenAI that I'm just going to call the Meta people. Like, the people that came over from Meta a little bit more recently, who are maybe a little bit more flexible in what they're willing to see their company do. And I don't think that they're going to raise a big stink about this. The problem, if you're OpenAI leadership, is you actually need that original core, right? If you're going to build a GPT-6 and 7 that is going to blow everybody's minds, those are the people you're going to need. And so, yes, almost everything that we have seen over the past few days as they've tried to do damage control is aimed at those people.

>> Okay, so that's a little bit of the drama going on at OpenAI. What is

happening at Anthropic?

>> Printing money, in two words, I would say. Well, I mean, you know, I wrote this in my newsletter this week, Kevin, but has an American technology company ever had such a good week and such a bad week at the same time?

>> Explain.

>> Well, so on the bad side, obviously they're in a very heated fight with the Pentagon that continues. By the way, it seems like there is still some risk that perhaps the president will try to invoke the Defense Production Act to try to compel Anthropic to make the version of Claude that it does not want to make, one that would sort of do its bidding. And it seems also that the supply chain risk designation is now official. We learned on Thursday that the Pentagon sent a formal letter to Anthropic. So if nothing else, this is going to result in a long and costly legal battle as Anthropic tries to ensure that American companies can still use it for non-military purposes. Right? So there is actually an existential threat to the company that is buried somewhere inside there, and it is by no means over, right?

>> But on the good side, Bloomberg reported this week that Anthropic is on track to hit $20 billion in annualized revenue. At the start of 2025, Kevin, they were on pace to earn about $1 billion in annualized revenue. So, this company has 20x'd over the past year. They were on pace to make about $9 billion by the end of 2025. So, it has doubled in barely over two months, which speaks to the rise of Claude Code, right? And the overwhelming adoption of Claude in the enterprise. So in that respect, this really has become maybe the fastest-growing American technology company of all time.
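(A quick aside for anyone who wants to sanity-check those figures: the multiples Casey cites follow directly from the numbers quoted in the conversation. A minimal back-of-the-envelope sketch in Python, treating the dates as approximate:)

```python
# Back-of-the-envelope check on the Anthropic revenue figures quoted above.
# Dollar amounts are annualized run rates in billions, as cited in the conversation;
# the timeline ("start of 2025", "end of 2025", "now") is approximate.

start_of_2025 = 1.0   # ~$1B annualized at the start of 2025
end_of_2025 = 9.0     # ~$9B annualized by the end of 2025
now = 20.0            # ~$20B annualized, per the Bloomberg report

print(f"Growth over the past year: {now / start_of_2025:.0f}x")   # ~20x
print(f"Growth since the end of 2025: {now / end_of_2025:.1f}x")  # ~2.2x, a bit more than doubled

# Implied compound monthly growth if that last jump took roughly two months
months = 2
monthly = (now / end_of_2025) ** (1 / months) - 1
print(f"Implied monthly growth over that stretch: {monthly:.0%}")  # ~49% per month
```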

>> Yeah. And what's so strange about this sort of dual quantum state of Anthropic right now is that at the same time that they are printing money, and people are signing up for Claude and switching over from ChatGPT, and things appear to be going well for them, they are also being pulled out of the federal government, forcibly. There was some reporting this week by Reuters that the US State Department has started to comply with this order from President Trump to stop using Anthropic's models. They have switched the model powering their in-house State Department chatbot from Anthropic's models to OpenAI, according to this memo seen by Reuters. And furthermore, this Reuters report said that the State Department is going back to GPT-4.1. Now, if you have not been tracking all of the model names and numbers as closely as we have, that is several generations ago. That's like an early 2025 model. And basically what that means is that the average college freshman with a ChatGPT subscription now has access to substantially better AI tools than the Department of State.

>> It's not great for a lot of reasons, Kevin. And one of them, as the blog Lawfare covered this week, is that there appears to be no statutory authority for the president to do what he did. There is not a statute that lets the president just declare that federal agencies cannot use individual software products. But because this is just the way the Trump administration works, everyone has just decided to defer to the president.

>> Yeah. I want to ask you about this other sort of interesting piece of OpenAI's response over the last week, which is that Sam Altman has said multiple times that he wants the Pentagon to extend the same deal to Anthropic that it extended to OpenAI. Do you think that is sincere? What is going on here? Why is Sam Altman saying, "Hey, if you're making these terms available to us, you should also make them available to other AI companies"?

>> I think that that is the part of Sam that appears to be sincere, in saying that he wants to de-escalate this conflict. He does not want the United States government to come in and nationalize the AI companies, at least not right now, right? And so maybe if OpenAI could reach some sort of agreement that would provide at least some protections for Americans, and other AI companies would sign on to it, that would just release the pressure on the industry overall. Now, of course, at the same time, it would buy him a lot of cover, and all of a sudden people wouldn't be mounting these quit-ChatGPT campaigns, because Sam could be on X saying, "Well, you know, Claude's doing the same thing." Do you think that's real? Like, how big a deal do you think this consumer opposition is?

>> I mean, you know, I am somewhat jaded on this point, because I can't count the number of times that people have said, you know, oh, we're all going to cancel our subscriptions to this thing, or we're going to delete Uber, or we're going to quit Facebook in protest, and it never really seems to have much of an impact. But do you think in this case that enough people are mad about this at the consumer level that it could actually impact their business?

>> Not really. I think you're exactly right. I think that usually these things just tend to blow over in a few days. And I'm sure that OpenAI is counting on that. At the same time, though, Kevin, I think back to the lesson that Meta learned, which is that as it had its own series of controversies, by and large, people did not quit Facebook. They did not quit Instagram. But you know what they did do? Just kind of start to hate Meta as a company and develop really low trust in that company. And that winds up hurting Meta in all sorts of ways. And the particular way, by the way, that I think this is going to hurt OpenAI is that they're gearing up to go out and build a lot of data centers around this country. And there's already enormous backlash to that, as we are seeing, right? We're starting to see it creep into our politics. And so, if they are not able to reverse the narrative and convince people that AI is going to have hugely positive outcomes in their lives, I think you're going to see the data center opposition ramp up as a proxy for people's just kind of general distrust of that company.

>> Right. It's the visible physical symbol of all of this, and for most people the only one that is anywhere near them. And so I think you're right. It could turn into a political problem for them even if people aren't canceling their chatbot subscriptions en masse.

>> I want to ask you about something else that I've been thinking a lot about this week, which is this idea that you mentioned of nationalization. There's been a debate happening on social media about this idea that if we are headed to a world with very powerful AI systems in it, as Dario Amodei calls it, a country of geniuses in a data center, then eventually that will just not be allowed to happen inside a private corporation. That the US government, whether a year or two years or five years from now, at some point will step in and say, hey, you guys built this really cool thing that's really useful and has all these important geopolitical and national security implications. We're going to just take that now, and you work for us now. And I'm curious what you make of that as a possibility, because some people who I consider quite serious and credible have been talking about this threat of nationalization for several years now.

>> Yeah. If you go to the sort of nerdy AI conferences that Kevin and I do, this comes up a lot at the tabletop role-playing games that people do during lunch, right? The scenario is that at some point, a government of one or more countries kind of steps in and takes over the AI lab. I understand that in this moment that feels like a kind of sci-fi scenario, right? Like, most of the time when you're using ChatGPT, you probably don't think, this is a dangerous super weapon and we need to ensure that this is being controlled by the president. At the same time, we are now at war with Iran. We know that these systems are embedded in the command and control operations of the military, and so to some extent they are already becoming weapons. Right? So if you say to me, do I think that once these systems become 3, 4, 5, 10 times more powerful, the government will want to take an interest in them and potentially oversee their development and deployment? I absolutely believe that will happen. I see no reason why that wouldn't happen. And unfortunately, how that goes, I think, depends a lot on the quality of the government that is overseeing that AI, right? And what do they want to do with it? Do they want to use it to create opportunity and safety and democracy for all, or do they want to, you know, mount an authoritarian takeover of the globe?

>> So if you are a leader at one of these companies, and you know that at least until 2028 we are likely to have sort of the same administration in power, and if you believe that the technology is rapidly accelerating such that a year or two years or three years from now we might have something like a superhuman country of geniuses in a data center, what does that mean you should do? I mean, one thing that I've been thinking about is, should these companies be doing deals with the government at all? Right?

If the lesson of the past couple of weeks is that the federal government is not a trustworthy counterparty in these negotiations, and that it's going to insist on total control and obedience or else try to nuke your company, I think a very rational response from these AI companies would be, well, we're just not going to make any more deals with you. You're going to have to use some open-source models for your State Department and your military and your Treasury, because it's just too risky for us as a business. And you can't be trusted with it.

>> I could see why that may seem somewhat rational to them, but I don't think that that is the tack that they're going to take. I mean, even this week, after everything that has happened with Anthropic, Dario Amodei is still out there saying, we were very close to an agreement with the Pentagon. We liked working with the military. We want to work with the military again, right? So, I think that's very important to note: Dario did not throw up his middle fingers on his way out the door. He is still trying to reach some sort of agreement. And I think in part that is likely to avoid the exact sort of scenario that you are describing, right? You kind of want to keep the tigers at bay for just a little while longer, at least while you think through the rest of that scenario, which is admittedly a very difficult one.

>> Yeah, I've been rereading The Making of the Atomic Bomb this week, which is Dario Amodei's favorite book. He used to give it to all Anthropic employees, and there's still a bunch of copies at their headquarters. It's sort of the company book as far as their mission goes. And they see a lot of parallels between what they're building and the Manhattan Project. And so I went back and I've been rereading it. And the piece that struck me from that experience was that just right before the bombs were dropped in 1945, there was this point where the scientists got really worried about how their creation was going to be used. And a number of them from the Manhattan Project created these petitions and reports and tried to get them to the government and say, "Hey, could you guys not use this against a city, at least as a first-line act of war?" And the military and the government sort of pretended to hear them out, and then they just went ahead and bombed Japan anyway.

>> And there was sort of this moment where it was like, we hear you. You're the scientists, you're the geniuses who made this all work, but now you're playing on our turf, and so we're going to control the technology from here, and thank you for your input. And I think the comparison between the Manhattan Project and the AI industry is somewhat overstated, and I think it breaks down in some key ways. One of which is that that was a government project. You know, the Manhattan Project was paid for by the government. These were government employees. What we're talking about now are private companies that have been developing this thing outside the public sector. So I think there are some important differences, but I do worry that we are headed toward a moment where this stuff just gets so useful to governments and militaries, and confers such a decisive advantage to the countries that control it, that the US government, no matter kind of who is in power, is just going to say, this thing is too important to be left to the private sector.

>> Well, I mean, keep in mind that one of the original ideas for OpenAI was that it should be a government-funded project, but Sam Altman and his co-founders just came to the conclusion, correctly, by the way, that no government would give them the amount of money they needed to build this technology, right? And, you know, they just sort of quickly came to the conclusion that it was just going to have to be a private enterprise. But, you know, going back to the earliest days, there was thinking among the people that created this technology that the government was going to take an interest in it eventually. Another reason, though, Kevin, why I find the current situation so vexing is that you and I both covered President Biden's executive order on AI, which I personally felt was a pretty gentle way of attempting to regulate the industry. It was sort of, you know, inform us about your safety testing, please, when you test these new models, and it sort of told federal agencies to get ready for this technology. And the howls of protest on the right said, how dare this administration come in and try to put these fetters on capitalism. We are going to lose to China because of this sort of nanny-state behavior. And then to see those same people come to power and now say, we are going to tell you exactly how you are going to build your models, and what they are going to do for the military, or else we will destroy you. Just, like, the whiplash is insane.

>> Yeah, we didn't like that government trying to control the tech industry, but this government trying to control the tech industry, that's just business as usual. That's fine. Right.

>> So, I guess my worry, zooming out from all of the stuff that's been going on for the past two weeks, is that we are sort of living through an early dress rehearsal for what something like nationalization of the AI companies could look and feel like. I don't think it's going to be as cut and dried as it was during World War II, when the government showed up to a bunch of steel plants and was like, "Hey, we run these now." I think it's going to be kind of this soft nationalization, like we've been seeing over the past week, where it's a little pressure to build your models differently. Oh, maybe could you remove some of those safeguards? Oh, maybe this is actually so strategically important that we need to be the people putting the clauses in the constitution of Claude, or whatever, that dictate how it will behave in these high-stakes situations. And I think that is a more likely direction, but I would not take full brute-force nationalization off the table entirely. I think there's a decent chance that something like that happens.

>> Well, maybe we should set up a prediction market for it.

Speaking of prediction markets, when we come back, we'll talk about how prediction markets have made it to war.

So predictable.

Okay, Casey. So the other big news from the past week is that the United States is now at war in Iran. And one angle that really has been sticking out to me about this is the role that prediction markets are playing in this conflict. Because I think that is something that we truly have not seen before.

>> Yeah, it seems like every new war brings along some grim new technology. And I would say that prediction markets are maybe grim technology number one for this conflict in Iran.

>> Yes, it's a grim technology already, even absent the war, and now just with the war, it has become even grimmer. And we've talked about prediction markets on the show. We talked about them way back in 2023, when they were sort of this new thing that was kind of in this legal gray area, that wasn't really being done at any scale yet. It was sort of an interesting idea. Now, of course, you cannot walk down a street in a major American city without seeing one and probably multiple ads for prediction markets like Kalshi and Polymarket.

>> Yeah, the sort of gambling mania that has taken over all media and advertising, you know, from DraftKings to FanDuel, has now extended even further into these prediction markets. So both Polymarket and Kalshi, the two leading prediction markets platforms, took a lot of heat this week on bets they were allowing their users to make on questions related to Iran. So, Kalshi, which is kind of the more regulated, US-based prediction markets company, does not allow bets on war or assassination, but it did allow the question "Ali Khamenei out as supreme leader," basically as a kind of careful proxy for betting on the outcome of a war or a strike on Iran.

>> Yeah. And "out," I suppose, could have, you know, many meanings. You know, perhaps there would be a sort of gentle democratic revolution in Iran. But I'm going to assume that most of the people who were wagering on that one assumed that he was going to be killed in war.

>> Yeah. So people got really mad at Kalshi for allowing these bets on the fate of the Iranian leader. They also got mad when Kalshi sort of voided this market and said that it was going to reimburse anyone who may have lost money on this, basically make sure everyone ends up in the black. But people who were supposed to make a bunch of money because they correctly predicted the death of Khamenei were mad that they didn't get paid out their expected winnings. So just a big cluster all around.

>> And I just want to say if you were one of the traders who did not get your expected winnings from the death of the Ayatollah, I just want to say I don't care and it doesn't matter.

>> So Polymarket, the other, less regulated, offshore, crypto-based prediction market, was even more permissive. They allowed people to bet on the dates of strikes on Iran and other details related to the war in Iran.

>> Their policy was really like, imagine the worst thing you could do on our platform. You can do that.

>> Actually, they did draw a line when it came down to markets that allowed users to bet on the likelihood of nuclear detonations by specific dates. So, sorry to anyone who was trying to cash in on nuclear war.

>> These woke liberals that won't let me bet on nuclear explosions need to go, Kevin.

>> So, no one was happy about this. Senator Chris Murphy posted that, quote, "It's insane this is legal. People around Trump are profiting off war and death." And he also said that he was introducing legislation to ban this. And there are also a bunch of people looking into whether any of this has been done via insider trading. Basically, do you have people in the military or close to the decision-makers in this conflict placing bets once they have this sort of non-public information about what is going to be happening?

>> Yeah. And I think it speaks to why allowing prediction markets to take bets, at least around, you know, war and death, is so corrosive and bad, Kevin. Because not only is it just kind of grim, like, how do we live in this society where gambling on war and death has become a sort of form of entertainment, but also you're just creating incentives for the worst things in the world to happen, which doesn't seem logical to me.

>> Well, and it's not even a theoretical harm here. Recently, Israel arrested a number of people who were accused of using classified information to bet on military operations on Polymarket. So, this is already starting to happen. And I think this is why people like Senator Chris Murphy are so alarmed about this. Not just because it's sort of gross and aesthetically offensive to have people betting on wars,

>> although it is.

>> Yeah, although it is, but also because it could create direct incentives, if you're a member of the military and your commander gives you an order to go do an air strike on an Iranian compound, to log onto your phone and head over to one of the prediction markets platforms and say, "You know what? I could make a couple grand off this."

>> Yeah, that's your little caution bonus. You know, this is not theoretical at all, Kevin. In fact, your colleague Amy Fan at the Times wrote that it is relatively uncommon for someone to bet a significant sum of money that a US strike will happen within the next day. But just last Friday, more than 150 accounts placed hundreds of bets of at least $1,000 correctly predicting that there would be an American air strike on Iran by Saturday.

>> Yeah. So, I think one of the interesting things here is, I am not a blanket opponent of prediction markets, right? I sort of bought some of the theoretical arguments for why something like a prediction market could, for example, outperform political polls: because it would incentivize people to come up with really good polling data and use that to trade on, and you could end up with kind of a better picture of a given election, or people will say what they really think because their money is at stake and they're not just trying to impress a pollster.

>> Yes. And you've actually had some of the people who are in charge of these prediction markets talking about the fact that insider trading can be good, because it can get the best information to the markets as quickly as possible and give people an unfiltered understanding of what the real insiders are thinking. Now, of course, officially you are not supposed to be able to insider trade on these platforms, right? They all have policies against it. Kalshi, the most regulated US platform that allows for prediction markets, says, you know, they've investigated people, and that it is actually illegal per the CFTC, which is their main regulator, to place bets using inside information. But there are a couple of problems with this. One is that the CFTC is a tiny agency. It doesn't have a huge team of enforcers going out to investigate what I assume must be hundreds or thousands of trades using inside information on their platform every day. It's also not clear what is public information and what is private information. You know, there are certain types of information in the stock market that are considered material non-public information that it is illegal to trade on. But it is also legal to, you know, fly a drone over an oil facility to see how their production is going, or to park outside a store and see the foot traffic going in and out and use that to calculate how well their sales must be going.

>> I find it suspicious how much you know about the insider trading rules. I have to say, I didn't know you had this much facility with the law here.

>> I'm calling my lawyer. But of course, this is part of the appeal of prediction markets in general: they incentivize people with good information to trade on that information.

>> Yes. And if you allow people to wager on almost anything, how are you ever possibly going to police the entire platform to understand who is insider trading and who isn't?

>> Yes. So, in this specific case of war, I think it's very dangerous for some of the reasons that we've talked about. Not only do you have military officers and service people disclosing classified information, in some cases to make a little extra for themselves, but you also have just this incredibly strange war-profiteering innovation, where you can just go on one of these platforms and try to make a bunch of money from something that involves a lot of devastation and destruction.

>> You know, the other thing that comes to mind for me, Kevin, is that, as you say, the prediction market backers' argument is, this just helps us understand the world better, right? This is a new kind of information that helps us see more clearly. And yet, as I look across all of the trades that you just described, I don't really understand what I was supposed to see more clearly, right? Like, maybe you get a brief heads-up about something horrible that is about to happen. Maybe that's useful in at least some circumstances. But for the most part, I just don't feel like we actually have a much better understanding of the world because all of these bets are happening.

>> Yeah. And I think in this specific case, that's especially true, because if you actually look at the markets that were being traded before this strike on Iran, the conventional wisdom of the crowd was that this was not going to happen. It was a very low probability, I think something like a 17% probability on one of these platforms an hour before the strikes. So, these markets aren't actually distributing the best possible information at all times. They're just kind of aggregating vibes until someone with inside information shows up and makes a fortune.
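(For context on how a number like that 17% gets read off these platforms: prediction-market contracts typically pay out a fixed amount, often $1, if the event happens, so the trading price is interpreted as the crowd's implied probability. A minimal sketch with illustrative numbers, not actual Kalshi or Polymarket data:)

```python
# Minimal sketch of how a prediction-market price maps to an implied probability
# and a payoff. Numbers are illustrative, not taken from any real platform.

def implied_probability(price_cents: float) -> float:
    """A YES contract pays $1 (100 cents) if the event happens, so its price
    in cents is read as the market's implied probability of the event."""
    return price_cents / 100.0

def profit_if_event_happens(price_cents: float, contracts: int) -> float:
    """Profit in dollars if the event occurs and each contract pays out $1."""
    cost = contracts * price_cents / 100.0
    payout = contracts * 1.00
    return payout - cost

price = 17  # cents, i.e. the ~17% figure quoted an hour before the strikes
print(f"Implied probability: {implied_probability(price):.0%}")   # 17%
print(f"Profit on 1,000 contracts if the event occurs: "
      f"${profit_if_event_happens(price, 1_000):,.2f}")           # $830.00
```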

>> Well, I think that's exactly it. It isn't as if these have been adopted by the mainstream and everybody's placing these sort of casual bets and now we have this beautiful, perfect understanding of the world. What we have, as you say, is a bunch of vibes plus some insider trading, and it just doesn't actually seem that useful to me in practice for most things.

>> Yeah, I want to try to steelman the defense of prediction markets here and see what you make of it. So, I think someone who believes that these prediction markets are good in the aggregate might say something like the following. People have been betting on war forever. They bet on the stock prices of defense companies. They bet on things like oil prices. That is all legal. We consider that sort of part of the normal markets. Those things all fluctuate when you have a war break out. How is this any different? Your response?

>> Well, I think that it is actually really meaningful that those are indirect ways of betting on war, right? It seems very unlikely to me that if I buy oil stocks assuming that they are going to go up, I'm creating an incentive for somebody to assassinate the supreme leader of Iran.

>> But wasn't this the whole conspiracy theory about the war in Iraq, that it was just motivated by Dick Cheney owning a bunch of stock in Halliburton?

>> Well, I mean, yes, that was the conspiracy theory. You know, I don't know that that was what was actually driving it. I think that, as with most wars, at least at that time, there were a number of interrelated factors that were going on. And, you know, maybe oil was one of them. But my point here is just that when you have the betting at some sort of meaningful remove from the action, it just feels better to me. It doesn't create the same horribly grim incentives that this particular approach does.

>> Right. I think the difference for me is the directness that you mentioned. And you know, one thing that came up over and over again when I was talking to people about prediction markets a couple of years ago for a story is that the assassination markets get really dark, because if you have something like, will this world leader be "removed from power," in air quotes, before a certain date, that could actually create a bounty on that person, where someone might go out and say, "Hey, if I want to make money on this, I need to kill this person before this date."

>> And you know what is going to be the first thing that actually takes action on that, Kevin? OpenClaw. Mark my words. One of these bots plugged into a Mac Mini is going to see a prediction market for the assassination of a world leader and it's going to say, "Well, I have some ideas about that."

>> So, I think most people agree that the assassination prediction market is sort of out of bounds and is a bad idea for lots of reasons. But I think there is still a lot of gray area around these questions about conflict and war and politics. And I think the risk here is that these prediction markets have gotten so popular so quickly, with so little regulatory oversight, that it is just kind of legal to do a bunch of stuff on them that it's not legal to do in the regular stock market.

>> Yeah. Well, so you mentioned that some lawmakers have talked about introducing legislation. My experience is that that kind of legislation typically doesn't go anywhere. What, if anything, do we know about what is going to happen, as this war continues to unfold in Iran, when it comes to these prediction markets?

>> I mean, I think the Trump administration is very unlikely to do anything to stop the growth of prediction markets. We've already seen them signal, via these regulatory actions that they've dropped against Polymarket, that they are not going to take a firm line against these prediction markets. We've also seen these prediction markets adding members of the Trump family to their advisory boards. So, I think all of these prediction markets are sort of becoming entangled with the administration in ways that are going to make it very hard for them to do anything. But I certainly expect Democratic lawmakers to stand up and say, "What the hell are we enabling here? Why are we allowing people to bet on the assassination of world leaders or the outcomes of a war in Iran?" This just feels all incredibly fraught to me.

>> Mhm.

>> My fear is that we're in a sort of time race, where if Democrats were able to somehow advance some legislation, maybe they win some seats in the midterms, maybe they retake the presidency, maybe sometime within the next few years they could meaningfully rein these prediction markets in. I think, though, if they continue to grow, my fear is that they will become a massive entrenched interest group, like the crypto world, and they will then lobby to ensure that Democrats and Republicans both feel like they have a vested interest in these things sticking around. So, you know, my fear is that if we're going to do anything about some of these excesses we've been talking about today, it needs to happen soon, or otherwise platforms like Kalshi and Polymarket might just have too much money for that to happen.

>> Yeah, I have a proposed rule for these prediction markets, which is that you should have to go to a physical place, like you do for a casino. I think that putting this stuff on people's phones makes it super easy for them to do it. Like, if you want to go bet on the war in Iran, you should have to go to a seedy, like, OTB betting place to do it. You should have to put in some effort. It should not be as easy as whipping out your phone.

>> All right. Well, it's very interesting, Kevin. I predict we're not going to try that.

>> No, I also predict we're not going to try that, but it's a good idea.

People should listen to me.
