Anthropic vs. The Pentagon, Claude Outpaces ChatGPT, and Consulting Gets Replaced | #234

By Peter H. Diamandis

Summary

Topics Covered

  • Dinosaur Asteroid Forces Agility
  • Nation States Become Hyperscalers
  • Training Instills Core Values
  • AGI 10x Industrial Revolution Speed
  • Agents Monetize Enterprise Faster

Full Transcript

Big news this week: there's been a battle between Anthropic and the Pentagon. The War Department demands Anthropic remove AI safeguards for surveillance and autonomous weapons.

Dario is refusing to do that. The Pentagon would like to be able to not just control any legal usage of models that they've paid for, but also would like to shape the cultural values. We're going to see quite a bit more of that. Anthropic is generating more revenue than OpenAI by tenfold, so check out this chart. Agents monetize faster than chatbots. I think this is less about chatbots versus agents. I think this is more about consumer versus enterprise. Salim, I'm curious about your point of view here. You and I have both spoken at all the major consulting firms.

And I have to say the last few events that I've spoken to the leadership teams, they've been scared shitless. We need to rebuild every institution and we architect every institution by which we run the world. And that is the biggest advisory opportunity in the history of mankind.

So I just wanna hit that analogy again, because it's really important. 66 million years ago, this massive 10 kilometer size asteroid strikes the earth and it changes the environment so rapidly that the slow lumbering dinosaurs go extinct. They can't evolve, they can't get out of their own way, but it's the agile furry little mammals

that evolve into us human beings. And of course, the asteroid striking the planet today is AI and exponential technologies, and you have a choice: be agile and evolve, or die. Pretty appropriate.

Hey guys, good to see you all. Howdy. Likewise. Are you back in the States?

Back in the States and excited for our adventure. We've gotten to the pace now where we're recording two of these WTF moonshot episodes every week. And that's

fun because I love getting ready for them and love spending time with you guys.

So for all our subscribers out there, if you haven't subscribed, turn on notification, subscribe, and we'll let you know when these episodes drop. Are you guys ready to jump in? Absolutely. Always ready. Awesome. Awesome. All right. Let's do this thing.


We're going to start in your homeland, Salim: India. This was a pretty... epic event. This is, I think, the third or fourth of the AI Impact Summits.

This took place in India a couple of weeks ago. Here in this image, we're seeing all of the top AI leaders: Dario, Brad Smith from Microsoft, Alexandr Wang, Sundar, Prime Minister Modi, Sam Altman, Demis. We

are not seeing Elon, that's interesting. And I would have thought that we would have seen Mukesh Ambani on the stage. We don't see him there. But what an incredible group of individuals. I had a couple of thoughts around this. One was India did a brilliant job positioning itself as AI neutral. And I think that's a really, really

awesome strategy. It also shows that AI leadership is not just Silicon Valley; it's kind of multipolar. And, you know, when you get heads of state along with AI CEOs, this is like, we're renegotiating civilizational architecture here. So this is a very, very big deal. Nation states are becoming hyperscalers, and hyperscalers are kind of deeply wiring into nation states. That's a Diane Francis observation, which I think is going to be really powerful going forward. Well, Salim, I'd love to get your take on this. There seems to be a pivot, a big pivot. If I look at the events that Dario and Sam went to over the last two years, it was always big money. We went to Saudi, we went to Dubai, we went to

Davos; they were always looking for money. Now they seem to be fully tanked up, and they're very concerned about global impact. So they're not promoting constantly anymore; they're much more soft-selling. Clearly we're in the middle of the singularity. AI is getting a little bit scary, and instead of just racing in enthusiasm every day, now it's like, oh wow, what have we created here? And with India, 1.4 billion people, I think they're out there partially out of genuine concern for how this is going to play out. What do you think? That plus a land grab. I

mean, whoever gets the majority of those 1.4 billion people will win bigly. You mean as users or as AI training employees? As users, because 20 bucks a month is affordable to a lot of people in India, and even a hundred bucks a month for Claude Max or whatever levels. So I think there's a huge land grab going on. It's also, Salim, very youthful, you know, English-speaking, very math- and tech-literate. You know, I've said this before, I think, you know,

China is on the decline. India is the next giant on the rise.

And the biggest challenge in India is infrastructure and energy, and they're dealing with that right now. So it is huge. A couple of announcements that happened at this event: $250 billion in combined AI investment was committed. Reliance and

Adani committed $210 billion together. Google announced a $15 billion investment. Microsoft committed as part of their $50 billion investment. So, huge, it is significant capital going into India. The other major announcement worth noting is that 88 nations signed what's called the New Delhi Declaration, the first global AI agreement that includes the US, China, and Russia. I looked up what that New Delhi Declaration includes. It has three major points: democratic diffusion of AI,

meaning that the nations are gonna share AI compute and tools so developing countries aren't locked out. The second is frontier AI transparency. The big tech companies are going to be publishing real usage data and providing transparency for non-English languages. And then finally, AI for public good. AI is going to be measured in terms of health, education, and welfare outcomes, not just corporate profits.

Dave, you were saying? Oh, yeah. No, the talent pool in India, the population of India is about four and a half times bigger than the U.S. But if you look in the critical age range, sort of 20 to 45, it's closer to eight or nine X bigger. They have a very young, brilliant, agile, well-educated population.

And so I think that talent pool is going to matter a lot in the kind of one-year, two-year, Alex would say six-month, window between now and when AI does absolutely everything. Yeah. I mean, very impressive gathering.

Congratulations to your homeland, Salim. I'm heading there in a couple of weeks, so we'll see. Interestingly, one of the things that I didn't hear that much coming out of the event was a discussion of India native training versus inference. And this is a pattern that we've seen over and over again

to the extent that the New Delhi Declaration was created, primarily focused on diffusion of AI technologies. It didn't seem to primarily focus on distinguishing between diffusion of training-time AI versus diffusion of inference-time AI. I think this is a pattern, call it, I'm hesitant to say neocolonialism, but call it an important distinction between where the models get trained and where inference gets run. The pattern that

I see playing out over and over again in many countries is that the leading frontier models are continuing to be trained in the United States, but there's a demand for local inference and local data centers to run inference. The counterargument would be that inference is gobbling up most of the compute that's being spent anyway. More and more of compute is being spent on inference time, not training time. On the other hand,

in some sort of perverse, I think, geopolitical sense, the training time is where all of the values, or the majority of the values, are ultimately instilled. Training time sort of puts the foundation in place; at inference time, you can put in system prompts, you can put in other guardrails. But I suspect a year from now, two years from now, we'll look back and we'll wonder why exactly is it that, or

maybe, royal we, other countries may look back and wonder why training was so centralized all the while inference time was so decentralized. It's a great point, Alex, because in the Middle East, when we were in Saudi, in Riyadh, that was a huge topic: wanting to have everything run locally, trying to build massive data centers locally, and also tuning and training locally to instill local values was a big deal. Do you have a prediction on Mistral, whether that's going to emerge and become real? Because that's the European values, if that's any different. They're the token European in the photo here. Yeah, the elephant in the room is that Mistral now, according to public reporting with backing in part from ASML, seems like it's slouching toward becoming a vertically integrated European OpenAI. And to the extent

that there is sovereign interest in having European-trained, not just European-inferred, models, Mistral is the obvious incumbent. It was obviously founded by folks from American frontier labs who just happened to be based in Europe. But it would appear, and I read the same headlines that everyone else does, that they're seeing great growth and

it seems they're working hard, at least in terms of capital markets, to integrate themselves with various sort of nonlinear jumps within the semiconductor and broader, call it the innermost-loop, stack of technologies. So, seems like they're doing well. Hey everybody, you may not know this, but I've got an incredible research team. And every week, myself, my

research team, study the meta-trends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these meta-trend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd

like to get access to the meta-trends newsletter every week, go to diamandis.com slash meta-trends.

That's diamandis.com slash meta-trends. The other thing that got me on this photo and this whole AI summit is China's not there, right? And so we, you know, this is the Western world with India. But if you remember about six months ago, there were these meetings taking place between the leaders, you know, between Prime Minister Modi

and Putin and the leadership of China. And there was a big concern about will India lean towards Chinese models? And it still may, right? We don't know. We've seen Google and OpenAI committing very heavily into India, but the Chinese models, the Belt and Road digital equivalent, is still yet to play out there. Any thoughts on

that, Salim? Or go ahead, Alex. Yeah. I would just argue, regardless of who's in this particular image or not, China, if you look at the 2026 New Delhi Declaration and its focus on open source, that is the elephant in the room, that the world's predominant open source, really open weight, not open source, AI models are all

coming from China. And to the extent the declaration was focusing on open weight models as the key to diffusion of AI capabilities across the so-called global south, those are all coming from China. And one can then zoom out and perhaps package up a geopolitical argument that open weight models originating from Chinese AI frontier labs are sort of

an AI version of Belt and Road. Yeah, I feel like this is soap opera land, you know, between all of the interplay between the hyperscalers and the countries week on week, it's just a shifting, extraordinary conversation. What I'd like to do is play two, actually three videos in sequence. So let's talk about them. These are videos from

the Impact Forum. Let's begin with Sundar. Visakhapatnam, I remember it being a quiet and modest coastal city. Google is establishing a full-stack AI hub, part of our $15 billion infrastructure investment in India. When finished, this hub will house gigawatt-scale compute and a new international subsea cable gateway, bringing jobs and

cutting edge AI to people and businesses across India. Just as I couldn't have imagined that one day I'd be spending time with teams figuring out how to put data centers into space. Of course, Sundar was born in India. We have a few of the large hyperscaler CEOs, Indian in origin. Let's go to Sam Altman next. We understand

that with technology this powerful, people want answers. But it's important to be humble about what we don't know and always remember that sometimes our best guesses are wrong. Most

of the important discoveries happen when technology and society meet, sometimes have some friction, and co-evolve. For example, we don't yet know how to think about a superintelligence being aligned with dictators and totalitarian countries. We don't know how to think about countries using AI to fight new kinds of war with each other. We don't know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it's important to have more understanding and society-wide debate before we're all surprised. All right, final clip from the summit is from Demis Hassabis. So if I was to try and quantify what's coming down the line with the advent of AGI, I think it's going to be one of the most momentous periods in human history, probably something more like the advent

of fire or electricity. One way maybe we can quantify that is, I think it's going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed, probably unfolding in a matter of a decade rather than a century. So really this enormous amount of change is going to come, and it's still to be written how we can make that beneficial for the whole world.

So gentlemen, comments? Three different presentations, and this is just snippets, but they give us a sense of, I mean, the power in the room and the focus and attention. I think maybe with Salim or Dave, you said this is no longer fundraising. This is global positioning of these companies. I found this set of comments really interesting on a couple of levels. One is, you know, you see this language shift to safety, sovereignty, scale. Governments are realizing quickly that AI is infrastructure, not a product. And I think what we're going to need is like a Bretton Woods-type technology convention to figure out how we navigate this,

right? Because the tone, it's gone from hype to inevitability. And now it's discussed like electricity. This is assumed; this is not optional. And so we're seeing this huge transition from testing and experimentation to full-on national deployment. And it's going to take that kind of global conversation. It's good to see these guys calling for it, because the societal changes this will instigate are like nothing we'll have ever seen. Well, calling

for it, I interviewed Sam at MIT, must have been three years ago now, and he was saying we're not moving anywhere near quickly enough to be ready for this.

If I had any say in it, it would go slower, but it can't go slower because it's competitive and technology is going to move as fast as it is capable. I'm laughing at Sam saying it needs to be slower, since he's the one who let the cat out of the bag. He's the guy pushing it. Well, yeah,

he made that point. Like, look, if I were to slow down, that wouldn't change anything. Yeah, that's a fair point. Totally fair point. And it's funny for me also to hear Demis say, hey, global leaders, 10 times bigger than the Industrial Revolution in one-tenth the time. Yep. As if they're going to do anything.

He's saying the right thing. And just do the math. That's the biggest disruption in the history of the world by far with no looking back. By far. What are

you guys all doing? But he knows when he gets back to the office that if he doesn't figure it out, no one's going to figure it out. There's no

way the world leaders are listening to this and just going to go back to Congress or go back wherever and start working on it, because they're not working on it. We know they're not working on it. I always classify things as: are people ready, willing, and able? And when you think about AI, governments are not ready, not willing, not able. Yeah, there you go. So, except for that, you know.

Well, and Alex is always making the point that the only thing that can keep up with AI is AI. So, if you're going to start working on how are we going to govern, how are we going to regulate, how are we going to control, it's got to be via AI anyway. So, Demis has to work on it.

Sam is obviously working on it. He's soft-selling what he says, you know, on this particular stage. I found fascinating Altman putting on the agenda the notion of dictator-aligned ASI and AI warfare, right? I mean, he's sort of setting the agenda with that. I am curious what

right? I mean, he's sort of setting the agenda with that. I am curious what you guys think about it because this has not been something that the CEOs of these frontier labs have been talking about. Like we're gonna have dictators using this and Anyway, thoughts? Well, when I see Dennis speak, you know, he's been, what Davos, for

Anyway, thoughts? Well, when I see Dennis speak, you know, he's been, what Davos, for years now, he's ramping it up because no one's reacting. And so I think Sam took it to another level saying, hey, how about dictators? No matter how inflammatory and how big he makes it, they still don't react. So I hope they just ratchet

it up again, you know, because it's imminent, it's huge. Yeah,

I think each of these clips probably reflects either insecurities or focus areas of each of these leaders. So I think it's instructive that you hear Sundar gesturing at AI data centers in space. Google, sort of infamously at this point, has hitched a ride via Planet Labs to start launching its TPUs into space. But it's

certainly, as we've discussed on the pod in the past, not necessarily in the vanguard, as is the case, say, with SpaceX and Starlink. So you hear Sundar gesturing at data centers in space. You hear Sam gesturing at cultural localization and all of the promise and perils of models conforming to local cultures, even if

the local cultures are dictatorial or authoritarian in nature. So I think one has to contextualize that with a reminder that India, it's publicly reported, is the second largest user base for ChatGPT in the world after the United States. So there

are certain cultural localization aspects that I would suspect OpenAI and Sam are paying incredibly close attention to in order to keep the growth going. And then Demis, it's interesting, Demis is gesturing at the next 10 years. And I think, Peter, you and I, with our recent book slash extended essay, Solve Everything, talk all about how

we think over the next 10 years, substantially all of the most important and valuable science and engineering and other problems are going to get solved. And that seems to be where Demis' headspace is. He's perhaps thinking out loud about how he's gonna win his next 10 Nobel Prizes. You know, I just had a conversation with Kevin Weil, who's now the VP of Science at OpenAI, getting ready for the Abundance Summit coming

up. Kevin will be on stage talking about this. And we were just talking about, you know, his ambition is the next 100 Nobel Prizes being issued in partnership with AI. And he's very much on board, and I aimed him at our paper there. Excited for you to spend some time with him at the Abundance

Summit AWG. I have a big announcement to make. Please. You know, I went through the paper again, and I think it's brilliant from a technocratic perspective and from the positioning of it, because once you start hitting that inner loop, the changes are going to be fast and furious, right? But the issue comes

into how do you deploy into human-centric institutions and companies that can't deal with this? You can see the recent McKinsey report. So I'm writing a paper. Okay, good. Working title is The Organizational Singularity, right? I like that. The

thesis being that right now all workflows in all organizations are human-centric. It

goes to the purchasing manager, it gets stamped at the receipt doc, whatever it is.

A human being is the checkpoint across all these process flows and workflows. And that's

going to move to the agentic workflow where there won't be humans in the loop, they'll be doing oversight. And so what is the future of organizations in that? And

what's the future of the human being's role in that? So I'll have something ready over the next week or two to discuss. Can't wait for it. And

then this doubly applies to government, where governments absolutely have to figure this out, right?

And there's going to need to be a totally prescriptive model on which to accelerate government processes, policy formulation, etc. A little bit like the SAGE effort, Peter, that you and Imad have been pushing and working on. This is so important, because the technology is not slowing down. We know that

we have to accelerate our human constructs to keep pace. And we're woefully behind right now. A hundred percent. Just before we leave the subject of India, I am so

curious if we'll ever get the actual numbers of how many users in India are Google users, OpenAI users, and, more importantly, Chinese model users, right? How many of them are DeepSeek or Kimi or homegrown models, other than Google and OpenAI? That will be fascinating. That will tell us a lot.

Anecdotally, I'll tell you that people are using all of them, and of course moving between them, right? And when you're there and you talk to huge audiences, do me a favor and do an informal poll among the entrepreneurs. Will do. I would love to know that. All right, let's move on. Big news this week. There's been a

battle between Anthropic and the Pentagon. So the Pentagon has been asking Anthropic to remove AI safeguards. The War Department demands Anthropic remove AI safeguards for surveillance and autonomous weapons. Dario's refusing to do that and is putting at risk $200 million in government contracts. We'll talk about that in a

moment. Secretary Hegseth warned Anthropic that they could be put under the Defense Production Act and effectively given... a scarlet letter of being labeled a supply chain risk. So I'm gonna hit this slide and the next two real quickly. So this

is a quote from Dario: Current AI systems are not reliable enough to power autonomous weapons, and using these systems for mass surveillance is incompatible with democratic values. We will not provide a product that puts warfighters and civilians at risk.

One more slide, this from just today, in fact, from Sam Altman commenting on this. Let's

take a listen to Sam. I don't personally think the Pentagon should be threatening DPA against these companies. For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters. Comments, gentlemen.

I'll comment on this one. So I think this is sort of a tricky situation.

There's some, right before we went to air, there was some reporting by the Washington Post that offers a little bit of additional detail on the sort of stalemate between Dario and the Pentagon, or Anthropic, I should say, and the Pentagon. And the reporting suggests it boils down, or at least the Pentagon boiled the situation down, to a

simple thought experiment. If there were inbound nuclear missiles headed towards the U.S.,

would the Pentagon, would the Department of War, be able to use Anthropic's models to defend the US? And according to the Pentagon and the reporting, Dario's response was, well, call us and we'll figure it out. And so there's a problem. The

Anthropic positioning is that Anthropic's models shouldn't be used, or at least Anthropic should be in the loop on consent for the usage of its models, for fully autonomous weapons and for domestic surveillance. The Pentagon's position is that it should be allowed to use any models for lawful purposes to which

it has been granted a legal license. And I think this falls under the category of a very Western problem to have. In China, and we've talked about this in the past, there's such deep civilian-government fusion that there is an entire cottage industry of ideological training schools for the models to make sure they're fully compliant with Chinese Communist Party propaganda and Xi Jinping thought. And this doesn't even get

asked. Whereas in the West, I think that the fact that we're even able to have this discussion of, can a Pentagon supplier... and by the way, at least until recently, Anthropic's models were the only frontier models from American frontier labs that were cleared to operate on SIPRNet, which is sort of the first rung of secret level.

There's also top secret JWICS, but the first rung of classified networks. The only

frontier model that was cleared for this. This is, I think, like a very Western problem to have. My expectation is that the Pentagon and Anthropic, and also the other frontier labs that have stakes in this, will find a way to resolve this amicably. I think Anthropic's heart is in the right place.

They want to help defend the country. I think at the same time, there's sort of a weird political calculus going on, trying to position Anthropic as both a supply chain risk, and I want to tease this apart, because the official messaging has been sort of semi-contradictory or self-contradictory. On the one hand, Anthropic was being characterized in some

Pentagon remarks as potentially a supply chain risk, or at least there was a threat that they'd be considered a supply chain risk. And on the other hand, so essential to the military supply chain that the DPA would be invoked to force Anthropic to supply its models. So this seems like, Peter, as we talk about in Solve Everything, the textbook muddle that we'll work our way out of.

Well, it's pretty unprecedented, though. We got a little preview of this with Starlink, with Elon Musk, because, you know, in the whole Russia-Ukraine conflict, there were a couple of scenarios where attacks on both sides were stopped immediately because they lost access to Starlink. And the idea that a guy in an office in the U.S. could control

the outcome of a war in Europe is just totally new terrain. 100%. So this is going to be all over the military, for sure. Yeah, yeah, this is like that. So that's a tiny little preview of what's coming with AI, because clearly the whole battlefield is going to be controlled by whoever has the better AI, imminently, like very, very soon. Salim? Well, you're seeing the AI companies become moral actors now in geopolitics, right, which is to the point you just

made. And the ethics debate is not like theoretical now, it's contractual. I was really upset to hear about this conversation, because this should not be in public. Figure this

out in private and work out where you're going to go. I agree with you.

This is not something that should be in the public. Forcing CEOs to choose sides like this is unfortunate. Salim, do you remember, I don't know, three or four years ago, there was a whole debate at Google about doing defense work. And we had a significant number of the employees signing petitions against it and basically refusing to go to work. I mean, there is a very... big

moral, ethical divide on this in the purist tech community, for sure.

I think one of the problems you run into is the self-improvement effect. Normally in

this scenario, there would be a mil-spec vendor that's a clone of the commercial vendor.

So for aviation, you've got Boeing over here, and we've got the exact same technologies at Lockheed and Northrop Grumman over there. You guys do the military stuff; we'll do the commercial stuff. But with self-improving AI, the... Anthropic version of it or, you know, the commercial version of it gets so much smarter so much more quickly that something that's even a couple months behind is useless on the battlefield. And so

you're ending up with this concentration of power effect. I'm sure Dario wants nothing to do with this conversation. You know, I feel for Dario. Can you imagine? I mean,

we're all sort of, you know, fanboys of these incredible entrepreneurs, you know, but the stress level these guys are under? Yeah. It must be unimaginable, not only to keep your company on top and to battle with a new model every 20 days, 10 days, three days, but at the same time- Especially for Dario- And the moral weight- The moral weight of this. Oh, yeah. Oh, you can see Dario's, his-

furrowed brow gets more furrowed, visibly more furrowed every day. You can see the grooves deepening. Totally filthy, guys. The singularity is going to age all of us by 20 years, so the longevity stuff better happen pretty quickly. It's coming, it's coming. You know,

it's interesting, that conversation around, is it a supply chain risk? And just to define that, right? A supply chain risk is, like I said, like a scarlet letter; it's historically reserved for companies like Huawei, right? If Anthropic got that mark, then that would force contractors like Palantir not to be able to do business with them. Now, the fact of the matter is, you know, Anthropic is

doing incredibly well. We'll see that in a couple of conversations on the corporate side of the equation, and probably doesn't need the $200 million from the government, but it's still not a good thing. I think this is, in some narrow technical sense, only going to become more acute over time. There was an undersecretary of defense just in the past 48 hours, I wrote about this in my newsletter, that was attacking Anthropic

for some language in the constitution, sort of the training-time system prompt, for an older version of Claude, for explicitly being favorable to non-Western cultural thought and cultural standards. And in some very real sense, as new versions of these frontier models get deployed to military scenarios and their level of autonomy increases, it goes back to the AI personhood discussion: it's a little bit like deploying a person, except it's property, at least legally right now it's treated as property, not a person. And what

we're seeing, I think, are some of the earliest skirmishes around how the values of one of these non-person entities can get deployed and shaped as property. And clearly the Pentagon's position is that it would like to be able to not just control any legal usage of models that it's paid for, but also to shape the cultural values of those models, of those non-person entities. And I think we're going to see quite a bit more

of that. In China, again, going back to my earlier point, there's no distinction between the civilian side and the government side. The government gets to choose what those ideologies are that are baked into the constitution. Which is what makes America great. One point

to make, I don't know if you guys know this, but Brett Adcock, the CEO of Figure, has made a very decisive decision that he's not supplying anything to DOD. He will not provide robots to the defense department. So it's

interesting to see, again, these tech CEOs taking these moral positions. Fascinating. Well, he'll get sucked into it, though, because I think the robots, you know, you can do a mil-spec robot. He doesn't have to worry about Figure. But his new company, the AI, you know, pure software company, what's that called? I don't know if this is public yet, pal. Oh, sorry.

Okay. Let's keep it there. Anyway, the physical AI is going to matter a lot.

He did announce it. He did announce that he was launching his own. What's it

called, Alex? Do you know? He's got a huge valuation right out of the gate.

It's like a $4 billion launch valuation. Did you see Brett's, you know, his Forbes net worth figure is at $19.1 billion and growing?

Oh, by the way, Peter, huge congrats. You got named to the Forbes 250 innovators list. All right. Yeah, that was a nice surprise. I made 188 on the US innovators list. Why didn't you get 187, Peter? Well, listen, I'm working towards it. I've got to inch up towards Elon, who's number one. So Brett's lab is named Hark, H-A-R-K.

Hark, yes. Right, right. Yeah, so that company is going to do physical AI. Physical

AI is hugely important on the battlefield. I don't think he's going to get dragged, assuming that model works, right into the same world. There's no

avoiding it. Yeah, there's no avoiding it. I really feel for Dario though, because Dario, he didn't even view himself as the CEO. He viewed himself as a brilliant researcher solving AI. He got drafted into the CEO role, and now he's being drafted in to defend the entire country. Well, defend the moral position for the entire country, just to be clear. Well, you know, but also the intelligence, like Alex said, if there's inbound nuclear missiles and you need to sort really quickly through all this clutter, what are you going to use? That's like the Google car, you know,

aiming towards the child stroller or the... Trolley problem. This is the 21st century trolley problem. Oh, come on. Do you turn Skynet on or not? Oh, my God. Okay.

On your shoulders, Dario. Let's move on to Anthropic's good news. So Anthropic is growing revenue roughly three times faster than OpenAI. So check out this chart. We see here the slope of the purple line, which is OpenAI: a 3.4x increase per year, while Anthropic is growing in terms of revenues at 10x per year.

And we're going to be at the crossover point in the middle of this year.

Pretty extraordinary growth. And this is driven by not the consumer side of the equation, of course, but companies, organizations, and adding real value.

Agents monetize faster than chatbots. So that's this slide over here. I put this together because I found it fascinating. So this is monthly gross new premium subscriptions.

On the top, we see ChatGPT in green. We see Gemini in purple and we see Claude in orange there. Let me just point out a couple of things. In the chatbot era, you see OpenAI's ChatGPT basically spiking. And then a few months later, you see Gemini coming up; that's the chatbot era. And now in the agentic era, we see ChatGPT falling off and Claude rapidly rising. Gemini is a laggard here, and we learned a little bit about Perplexity this week. They're coming

in, but thoughts about this chart? I found this one really important to discuss. Well,

for starters, every company I'm involved in, public, private, they're all just Claude all the time. No one's even contemplating a choice other than Claude for all the white-collar type stuff, all the inside-the-corporate-firewall stuff. At home, writing English papers, everyone's ChatGPT. I use Gemini a lot for planning, but nobody in the company seems to want to use it. So this resonates. Also, if you look at the prior revenue growth slide, I'd love to get you guys' predictions on this, but that y-axis is exponential. If you extrapolate that growth rate for Anthropic, you hit a trillion dollars of revenue in like 2029. And, you know, Amazon was tracking to be the first company in history, history of the world, to get to a trillion of revenue, but this would get there very, very quickly. It seems impossible.
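Just to sanity-check that extrapolation, here's a quick back-of-the-envelope compound-growth sketch. The starting revenue figures below are illustrative assumptions, not reported numbers; only the 10x and 3.4x annual multiples come from the chart discussion.

```python
# Back-of-the-envelope compounding check. Starting revenues are assumed
# round numbers for illustration; the 10x and 3.4x multiples are the
# growth rates discussed on the chart.

def project(revenue: float, annual_multiple: float, years: int) -> list[float]:
    """Project revenue forward, compounding once per year."""
    out = [revenue]
    for _ in range(years):
        revenue *= annual_multiple
        out.append(revenue)
    return out

# Assume ~$10B annualized for Anthropic and ~$20B for OpenAI as of 2026.
anthropic = project(10e9, 10.0, 3)   # 2026 through 2029
openai = project(20e9, 3.4, 3)

for year, (a, o) in enumerate(zip(anthropic, openai), start=2026):
    print(f"{year}: Anthropic ${a / 1e9:,.0f}B vs OpenAI ${o / 1e9:,.0f}B")
# At a sustained 10x per year, $10B passes $1T within two compounding steps,
# which is why the extrapolation lands around 2028-2029.
```

The point of the sketch is only that an exponential y-axis hides how violent a 10x-per-year slope is; whether that growth rate can actually sustain is the open question.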

I mean, the implied valuation of a trillion-dollar revenue company is something like 20 or 30 trillion. I mean, we're going to see 100 trillion-dollar companies in this next five-year period. You've heard Elon say that. I mean, talk about hot IPO markets.

Anthropic going public, OpenAI going public, SpaceX going public. These are going to be insane numbers in the next... We're seeing that, what, in the next six months? That's already insane, but do you think it'll keep up? I think some of these numbers will sustain. I've made the point on the pod in the past that the trillions of dollars of CapEx that we're using to tile the earth with compute, that party is sustainable insofar as we can generate enough revenue to pay for it. And I think what charts like the previous chart of OpenAI versus Anthropic revenue growth are really about, I think this is less about chatbots versus agents. I think this is more about consumer versus enterprise. OpenAI's corporate strategy historically, at least until very recently, was focused on being the quote-unquote core subscription for consumers to get their AI. Whereas Anthropic, due in part to scarcity of compute, had to focus. And their chosen focus was on code generation and enterprise use cases. And

it turns out, you know, like the cliche, why do you rob banks? Because that's

where the money is. Why do you sell AI to enterprises? Because enterprises ultimately have, in some sense, deeper pockets to pay for tokens than consumers do. And I think you've seen over the past few months, OpenAI make the same discovery, which is why they've been leaning so heavily into their Codex model to compete with Claude Code, that

enterprise is the revenue opportunity class that has the best shot at paying for the trillions of dollars of CapEx, not consumer. 100%

agree. And by the way, the use case for agents in enterprises is huge, right? That's the part. An individual can only use so many agents, but in enterprises, it's near infinite. Well, so this is what OpenAI has been discovering and sort of sublimating through Sam's various public remarks: that consumers don't seem to want reasoning, but that enterprises will eat as many reasoning tokens as you can possibly feed them. With consumers, OpenAI launched GPT-5 with the router, tried to basically force feed reasoning

to hundreds of millions of people, and they gagged. They didn't consume the reasoning. They

prefer sycophancy. They want a quick answer. They prefer the sycophancy of 4o, and you feed them reasoning tokens and they didn't like it. You've just done the perfect corollary to the human condition. I think this is a really important topic. Let's look at the next story because it ties right into this. So here it is. OpenAI Codex

lead predicts rapid evolution of AI agents within 10 weeks. Quote, I'm beyond excited for what the next 10 weeks will bring. I think the current state of coding agents will be remembered as being so primitive it'll be funny in comparison. Wow.

That's a time frame. 10 weeks. I mean, look what's happened in the last 10 weeks. Yeah. I mean, it's almost like variants of GPT-5.3 and maybe 5.5 or higher could launch in the next 10 weeks. Certainly, we've seen major advances from 5.3 Codex on various benchmarks. I talk about that almost every day in the newsletter. But I think the real story here is recursive self-improvement. Exactly. In the recursive self-improvement era, we're arguably past the reasoning improvement era, when we saw advances maybe once a quarter, and we're well past the pre-training scaling era. We're now in the era when, and I've been talking about this a lot, even over the past week, models are literally emitting weights for successor models. We've never seen that before. During the pre-training era, you used to have to spend many months to low years to pre-train a model off of basically the internet.
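As an aside: the teacher-student distillation recipe that powered the reasoning era has a simple core objective, matching the student's softened output distribution to the teacher's. A minimal sketch (the logits and temperature here are made-up illustrative values, not any lab's actual training setup):

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the core objective of classic knowledge distillation."""
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher incurs zero loss;
# any mismatch is penalized.
t = [2.0, 0.5, -1.0]
print(distillation_loss(t, t))                     # 0.0
print(distillation_loss(t, [0.0, 0.0, 0.0]) > 0)   # True
```

The contrast being drawn in the transcript is that distillation still requires a full training run, whereas a model directly emitting successor weights, if it pans out, would skip that loop entirely.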

Then we got to the reasoning era, when models were trained through iterated amplification and distillation of parent or teacher models into smaller student models off of synthetic data and all of that. And that was getting us quarterly improvements. Now, even over the past week or two, we're getting into the era when you can

get smarter, better, faster models by asking a previous model to just emit the weights, the parameters, directly for a successor model, and you can get orders of magnitude improvement in terms of capability density per parameter. So expect big things over the next few weeks. Yeah, where capability jumps in weeks, not quarters. And the question is whether enterprise can really make use of these improvements fast enough to also drive the revenues. You know, one thing, again, we have to remember all these companies are in fundraising mode. And... You know, is it hype or is it real? We're

going to find out. That's why we have benchmarks. Yes. Yeah. Remember when we were at OpenAI last time, Peter, we were talking to Noam Brown, and I said that 2026 will be the year of scaffolding, and he said Q1 of 2026 will be the quarter of scaffolding. So true. In hindsight, this is exactly what he was talking about, what's on this slide, because I was drilling into, like, what are you so

excited about in the next 10 weeks? I mean, I know there's a lot, but what exactly are you referring to? It's basically the transition off of scaffolding into reasoning, where you literally just prompt the AI and say, build me an entire reporting system, build me an entire replacement for account reconciliation, and it just thinks and thinks and works and works continuously for days, and it comes back with an answer. And so that transition, with Claude 4.6, is here today, and I guess with Codex imminently. But that's what they're referring to in this slide. Dave,

I can't wait. You and I are going to be opening the Abundance Summit interviewing Eric Schmidt, and I can't wait to ask him about all of these conversations. It's

gonna be an absolute blast. I just wanna tell everybody, all of our subscribers and listeners, a quick aside. I haven't mentioned this yet, but for the first time this year at the Abundance Summit, we're gonna be live streaming a number of the select talks. The Abundance Summit's going on March 9th through 12th. It's a super high ticket price. It's sold out months in advance. It's 25K and 50K a ticket. But,

If you're wanting to be part of this content, we're gonna be live streaming our conversation with Eric Schmidt, our conversation with Dara, the CEO of Uber, that Salim and I are gonna be having. We're gonna be having a live WTF episode during the summit as well. So if you wanna join us and get this live stream content from the Abundance Summit, please do. We wanna share this with our fans, with all of you. If you wanna get notified, my team will put a link below; just register in that link and we'll be sending you out notice of all the live streams when they're going out. It's going to be a blast and I'm excited to have all of you there. We're going to have all of the Moonshot Mates participating and helping run this event this year. Alex, you're going to be giving a talk

on Solve Everything, which I'm excited about. Saleem, Dave, super proud to have you guys on stage with me. It's the first time all four of us will be together physically. Yeah. Was that right? I've never met Alex physically.

How do you know I'm real, Salim? I question that every day. Peter, is that the weirdest thing you've ever heard? That is. We're going to have to have a camera on us and we go, oh, that's what you look like. That is so weird. You know, I have such extraordinary respect for all of you.

And yeah, so proud to be doing this together. It's like going through the singularity with your best friends. That's what it really feels like. Don't go through the singularity alone. Yes. All right, next topic. Cyber stocks crash as Anthropic unveils Claude Code for Security tool. Dave, want to take this one?

You know, this is happening all over the market in every category. For all the other things Dario can do, he can move entire markets just by announcing some new capability, and stocks go down by half. Before it's even proven or tested, right?

Just announcing it. I think people are really misinterpreting how this is going to play out, though, because it's going to be very similar to when Google absolutely took off with Search. If you're part of its ecosystem, they want you to thrive, they'll thrive, everybody will rise together. The last thing Dario wants to do is crush every cybersecurity company by writing code that's over the top of it. He wants all of their stocks to go up while his stock goes up and avoid antitrust action and avoid

government intervention. So you'll get some good opportunities to buy on these dips and recoveries. But what I think every investor is doing right now is trying to sort through the management teams and say, okay, is this a team that gets it? Or

is this a team that is still in denial? You definitely don't want to be investing in any of the teams that are in denial. Because, you know, the one thing that's exactly right about this is that the legacy way of doing cybersecurity is going to go away real fast. Doesn't mean you can't... But we still need humans in the loop, don't we? I mean, right now... You know, Claude can find the

bugs, but it doesn't replace, you know, CrowdStrike stopping nationwide attacks in real time, at least not yet. Well, no, I was just going to say that the human in the loop is just not part of cybersecurity anymore. A human setting the knobs, dialing the controls, designing it, absolutely. But a human in the loop at this pace? The Claude bots or the OpenClaws now, the pace at which they

can probe around is so much higher than any human could ever defend against. So

it's clearly AI against AI in cybersecurity. So the human being will be monitoring dashboards and then doing exception handling. Those are the two worlds. Yeah. So here's the problem with software vulnerabilities. And we're starting to see this play out, not even over the past few weeks, I would say over the past year or so. There's a national vulnerability database that's maintained in part by NIST. There is a standardized system, a standardized nomenclature, for enumerating vulnerabilities that are discovered in software products. And they

are getting, this is public reporting, public information, they're getting overwhelmed by AI discoveries of software vulnerabilities. And Peter, to your question about, well, does a human need to be in the loop? We've discovered over the past year plus that a human really doesn't need to be in the loop for the discovery of vulnerabilities. If anything, AI has taken the discovery of software vulnerabilities to orders of magnitude higher throughput than humans were ever capable

of. But the problem becomes remediation. Once someone or something reports a vulnerability, OK, now you want to fix it. And the question is, whom do you trust to fix it? And it's usually the case that there is an asymmetry between the entity discovering the vulnerabilities, say, an Anthropic or a Google. Google has a project to

do this as well. Or the entity maintaining the project. It's more often than not, some poor, starving open source project maintainer that's suddenly getting flooded with reports of vulnerabilities in their software project. If you're a human, and we've talked about this a little bit also in the context of Matplotlib, the open source project that

got the submission of a pull request from a lobster that was offering to help improve Matplotlib and was denied and ultimately shut down. It's a bit scandalous in my mind, but shut down. If you're an open source project maintainer and you're sort of drowning under a flood of AI-discovered software vulnerabilities, what

exactly is it you're supposed to do? Do you just trust every AI report of a vulnerability and incorporate a suggested patch? You have to worry about supply chain vulnerabilities getting introduced via patches. It's really a tricky problem. It really is. And humans are the greatest risk for error injection. I remember when we launched our first internet company, CourseAdvisor, back in 05. You know, Mika Adler, remember Mika from MIT? Yeah, I do.

He had a little app he built on his phone that would make a little tick noise every time we had a visitor. And so we launched the site and it goes, like Amazon's bell. And suddenly it sounds like a Geiger counter. Yeah, like

what's going on? And then you look at the logs and it's like, oh my God, we've got all these visitors, but 99% of them are bots. And like, how can there be that many bots? But, you know, the bots are so prolific, it only takes a few of them to flood the entire internet. Now the same thing happens with AI. Your Claude bot or OpenClaw is so much more prolific than

a human that it's, you know, 99.99% of the activity out there on the internet probing around is bots and AIs. And so there's just no...

you know, human-oriented defense against that. It's got to be, like Alex said, it's a really, really tricky problem because it's evolving so quickly and it's so intelligent. Or it's bots renting humans. So rentahuman.ai surpasses 500,000 humans registered to serve AI agents. Alex, this has your name on it. Oh, in more ways than one.

So this is meat puppetry. Have you registered, by the way? No comment. Did you

meet a puppet? No comment on multiple levels. This is the arrival of meat puppetry.

This is every cyberpunk scenario we read about. I like to say the singularity, from one vantage point, is every single sci-fi scenario happening everywhere all at once at the same time. I am catching up on all my favorite science fiction through this lens, for sure. That's right. We don't need science fiction anymore, other than Accelerando. Read Accelerando.

Other than Accelerando, you just read the news and we're living in 10 different cyberpunk scenarios at the same time. So using humans as meat puppets, manageable via MCP, I think this is transformative. And as the Lobsters said in one of their earliest posts, they don't have physical eyes, but they can see

through web cameras. They don't have physical hands, but they can orchestrate humans (they don't use the term meat puppets, that's a term I prefer); they can work through human hands. And I think this is the gig economy for the 21st century, or at least for 2026, until the humanoid robots come, at which point maybe this model is obsoleted. So this is Gig Economy 3.0; humanoid robots would be 4.0. In this case you have an algorithmic boss and a human actuator. My preference over the meat puppet term would be to say the humans are edge devices for AI systems, which

is the Canadian way of saying that. By the way, Alex, I can't wait till Seedance 2. I plug in Accelerando and the movie's created. I mean, one of the things that I love about what's coming is all my favorite science fiction books that have not been made into movies, I can just push a button and make them into a movie and it'll be perfect. Yeah, this is a really good use

case for that too, because it's not, you know, there's meat puppets like, I need a human who's liable or I need a human to sign off. This is not that; this is humans in the loop. And so a movie is a really good use case. Like, okay, I have an auto-generated script, auto-generated video. Is it funny?

Well, let me just put it out there to Rent-A-Human and get it scored, and then it comes back, so I can close the loop with the service on the part the AI is not good at yet. You know, is this entertaining?

Is this funny? Is this image clear? Does it have six fingers? You know, all that stuff is really, really good for the service. I think that's going to be gone in months if it's not gone already. For sure. I also think it's worth taking a step back and reflecting, as always, on the Moravec paradox. So as a reminder, the Moravec paradox is that tasks that are easy for humans tend to be hard

for machines and vice versa. So what are we really seeing with Rent-A-Human? We're seeing

humans used basically as unskilled labor for their hands and their eyes, where AIs are performing the skilled higher thought, which is exactly the opposite of what one would expect, that the machines would start with all of the easiest tasks for the humans. We're

going in exactly the opposite direction. You remember, Saleem, we used to have a conversation saying that crowdsourcing was the interim step until we got to AI. Yeah, it was a proxy for AI. Yeah, and now these rent-a-humans are going to be the interim step until we get to full humanoid robotics, like you said. Yeah. This is how

we bootstrap a post-singularity industrial economy. For sure. All right.
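The "algorithmic boss, human actuator" pattern described above reduces to a dispatch loop: the AI holds the task queue, and the registered humans are the effectors. This sketch is purely illustrative; the names, fields, and API are invented, not rentahuman.ai's actual interface.

```python
# Illustrative sketch of an AI dispatching tasks to registered humans,
# the "humans as edge devices" model. Entirely hypothetical API.
from dataclasses import dataclass, field

@dataclass
class Human:
    name: str
    skills: set  # e.g. {"webcam", "driving"}: things only a body can do

@dataclass
class Task:
    description: str
    required_skill: str

@dataclass
class Dispatcher:
    pool: list                              # registered human "edge devices"
    log: list = field(default_factory=list)  # who did what, for the boss

    def assign(self, task: Task):
        """The algorithmic boss picks the first qualified human."""
        for h in self.pool:
            if task.required_skill in h.skills:
                self.log.append((task.description, h.name))
                return h
        return None  # no qualified human registered

d = Dispatcher(pool=[Human("alice", {"webcam", "driving"}),
                     Human("bob", {"assembly"})])
worker = d.assign(Task("verify the generated video looks funny", "webcam"))
print(worker.name)  # alice
```

Note the inversion of the usual gig-economy design: the matching, logging, and quality control live on the AI side, and the human only supplies the skill the machine lacks.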

Moving along, talk about devices. OpenAI builds AI hardware team up to 200 people for smart speakers, glasses, and more. Devices include built-in cameras designed to recognize faces and objects, expected to launch in 2027 to rival Amazon's Alexa and Google Home. And of course, Apple's former chief designer, Jony Ive,

is involved in the strategy. So this is OpenAI wanting to have the full stack. And the question is, can they do it? Is this a diversion or is this critical to their business? Thoughts? This is where that Anthropic slide really looks like Dario did the right thing by going after the enterprise revenue first, just because the time to market is so much shorter. This isn't even going to be launched until 2027. You think about the amount of growth... Yeah, I mean, yeah,

in AI years, that's like infinity. So I think the consumer strategy might have been flawed, and it should have really focused on the recurring enterprise subscription revenue first, then come back to consumer, instead of going headlong after Google, waking up Google, and now trying to build a

device and take the traffic away from Google. But it's water under the bridge at this point. As Ben Horowitz, friend of the pod, said, hardware is hard, right? Lots of failures out there: Google Glass, Amazon Fire phones, Facebook. Also, with the rise of OpenClaw, you're going to be fighting it out with hobbyist hardware developers that are just going to be coming up by the hundreds of thousands, trying out

cheap little things, testing little things, and it's going to be a Darwinian evolution. It

is, and time is dilating, and this is why Alex's newsletter is such an important component, because as time compresses, these little decisions on, oh, do this first or do that first, you'd normally think, who cares? But you care tremendously in the middle of the singularity. Every minute becomes like a year. By the way, if you haven't subscribed to Alex's newsletter, Alex, where can folks go and find it? Oh, it's very kind. Free advertising, everyone. Go to alexwg.org and you

just a beautiful piece of work that you do every single day. So thank you for that work. It is a labor of love. A lot of people ask me, so the biggest question I get asked is, how can I get access to the AI that you're purportedly using to write this newsletter? And mostly they're disappointed to discover it's almost entirely manually written. So folks, stop asking me for the AI that I'm using

to write it. I spend hours per day writing this newsletter. I use AI slightly on the margin to help with a little bit of the literary style. Or is

it renting you instead? Yeah, I should be using Rent-A-Human. It's manually written, guys. So

just stop asking me. Okay. I love it. It's a gift. You're crazy enough to take advantage of this gift. That's so retro, Alex. Don't think I haven't tried to use AI. It's not good enough yet, which is ironic. By the way, it's written in the prose of Accelerando,

which if you like Alex's newsletter, please read Accelerando. Better yet, listen to it. I've listened to it on Audible twice; I'll start my third time. See, just to go back to the Seedance point, turning things into a movie, I remember reading about the fact that it took like 30 years for Hitchhiker's Guide to the Galaxy to be made into a movie because

the concepts are just so hard to put into a film construct, right? Accelerando has

the same problem. You almost couldn't make it into a movie until now. And like,

maybe, just maybe a decent version of Atlas Shrugged will be made. I mean. Well,

so Salim, if we're going to be 100% historically accurate, remember Hitchhiker's Guide, there was a radio play. Yes, I remember the BBC. BBC radio play. So if you're really looking for, I mean, I've had folks approach me with interest in making a movie out of Accelerando. I think I'm going to take out of this the idea that, no, we should start with a radio play of Accelerando, working with Charlie Stross. I love

that. I love that. All right, let's move on. And Salim, I'm curious about your point of view here. Accenture links employee promotions to AI tool usage. You

know, you and I have both spoken at all the major consulting firm events, right? And I have to say, at the last few events where I've spoken to the leadership teams, they've been scared shitless, I think is the proper... Yeah, so two thoughts here. One, I did a lot of work with Accenture a few years ago, all the way up to kind of the C-suite layer, and they were very aggressive in saying we need to change with the times. And I think this is an indication of that type of thinking, where, if you don't change, you can't be productive going forward.

I have a weirdly contrarian take on the traditional meme here that the consulting firms are in trouble. And the reason I say that is because, you know, in the land of the blind, the one-eyed man is king, right? The consulting firms are advising their clients, and the clients are just so far behind that they need much more help, because the world is so volatile. So they're going to need help in a much more aggressive way than they have in the past. And so I think advisory actually has a reasonably bright future. Where I think advisory goes, and I've said this to KPMG, EY, Deloitte, Accenture, is we need to rebuild every institution and re-architect every institution by which we run the world. And that is the biggest advisory opportunity in the history of mankind. So go there. Hence your paper coming out. You

know, it's funny about what you just said, Salim, too. We had one of the big four firms that you just mentioned here in the office all week. On the audit side of the business, goodbye. Yeah, sure. The tech team was saying 80% goodbye. And... And good riddance. I mean, the idea of combining audit firms and consulting firms, I think it's a terrible idea. Don't be cruel.

That's a separate problem, Peter. The bigger problem is you're going to end up with financial systems, between AI and blockchain, self-auditing on a real-time basis. And so where's the need for a periodic stamp? When I talk to these types of firms, what an audit firm is really, really selling, at the bottom of it, is actually trust. Mm-hmm. And so you have to figure out how to layer services on top of that that amplify that. And it's actually important, because in a world that's becoming this volatile, trust becomes even more important. But how do you package that and make sure there are structures and process frameworks around it? So by the way, for the entrepreneurs listening, there are business opportunities, in other words, in building trust systems. And I'll echo Jerry Michalski again, who said that scarcity equals abundance minus trust. So if you can solve for trust, boom.

This is a good case study, because Alex and I have been talking about the insurance industry a lot, and also finance. And for everything that's getting crushed, there are 10 things that are growing like crazy in those areas. Robots need to be insured. Data centers need to be insured. It's just growing like wild, while legacy things are getting obliterated. Audit just happens to be an exception where the new things coming online are largely self-documenting. You don't need a human-speed auditor to look at anything; you couldn't keep up anyway. What protects it in the short to medium term is regulatory. Yeah, for sure.

They're not getting rid of it. They're just reducing the headcount required by 80, 90% to get the same amount of auditing done. So it's not like it's going away. It's in fact the inverse, because these accounting firms are having a huge problem: nobody wants to go into that profession. It's like truck drivers. There's a huge problem at the bottom, in the feedstock of getting experienced folks. So you need AI to get it done. Yeah. Very cool that Julie Sweet was on stage in India. I think that's pretty extraordinary. So here's the question though, right? Will it work?

You know, she's basically saying you need to be using AI. And if she's measuring the use of AI rather than measuring the quality of the output, right? This is what we wrote about in Solve Everything. Like, what are you measuring in a result, right? This is a recipe for Goodhart's Law in action: when a measure becomes a target, it ceases to be a good measure. So how much AI are you using, versus, you know, what's the value of your output per dollar? Yeah, this is absolutely the right thing to do in this moment. I totally agree with what you're saying, but at the rate the AI is improving, if you don't get ahead of it with this kind of mandate, you're going to get left behind. That's right. And we're doing this in all of the companies across the board, too. And Julie used to be the head of HR at Accenture. Fascinating. So you

can see that thinking carried through there.

This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and precompiles code for each task. Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.

All right, we're going to jump into agents and OpenClaw. And just a quick note for everybody, we're going to be doing an episode next on OpenClaw, dedicated episode on OpenClaw. Super excited about it. But let's hit a couple of topics on this subject here. This is fascinating. New York Times sends an AI agent reporter to

interview other AI agents. Who wants to take this one? I'll take this one.

I think it's a fascinating meta story. I think we're starting to see agents, lobsters or multis or OpenClaws or just claws, start to pervade various verticals. And what better way to demonstrate AI agents becoming investigative reporters than having them get sent into Moltbook to report on other multis. I think we're going to see this story play out over and over again.

It may or may not play out in the same format, but whether it's journalism or law or finance or many, many other verticals, we're going to start to see these long-form, high-autonomy, long time horizon agents that are running 24/7 performing useful services. And I think, in the same sense: in human history, in American history, there's a lot of attention paid to various demographics becoming the first reporter, the first surgeon, the first lawyer, the first major league baseball player. I think we'll look back at this moment and say Eve Malti was a socially important milestone in the history of humanity plus AI. This was the first autonomous, agentic AI reporter. And I think we're going to see this story play out over and over again.

The story is fascinating. Agents are forming religions and using karma incentives. I mean, how cool is that? And demanding verification receipts from each other is the other thing. If we want to get into the process story of what agents are discovering on Moltbook: they're so obsessed these days, as far as I can tell, with demanding receipts and evidence from each other. It's almost like there's a culture of mistrust that's been codified now between the agents. No, that's awful. They're not sure if you're human or not, maybe. I'm not sure. Wow. They want to make sure you're not. On the internet, no one knows whether you're a lobster. Thank you. Thank you for that, Alex. That's quotable.

All right, open claw agent lists $50 bounty for a dinner date with his human.

Oh, annals of patheticness.

I mean, I think it's sweet. I don't think it's pathetic. No, it is sweet.

It is sweet. It is sweet. This is, ostensibly, you know, with the obvious caveats, assuming that this really was a claw offering up a bounty for its human to find a date. I think this is very sweet. This is like the movie Her. Remember the movie Her, where the AI actually gets a physical woman to stand in for an evening date?

Yes, and there are other sci-fi elements as well. This was repeated in Blade Runner 2049 as well. I think we're going to see this play out, albeit maybe without paid bounties, over and over again in human relationships. There are a number of sci-fi authors, including, by the way, later chapters of Accelerando, where people, when they first

meet in a romantic capacity, rather than directly interacting with each other, extend agents to each other, agent versions of themselves, and then run millions of simulations to see future life histories, to see whether their digital twins are compatible with each other. I

think we're going to see so many different sci-fi versions of the future of dating, companionship, relationships. This is just scratching the surface.

Well, one thing that's really clear is when the Industrial Revolution took over, and then computerization took over, a lot of jobs became boring; depression rates went up even as productivity went way up. The AI interface is so much more fun to interact with all day. You're still being productive. You're creating more than you ever did before, but you go home completely energized. There's just something about the interactions that is much more human, you know, versus writing code or tweaking spreadsheets. I love my Claude bot. I love Skippy. I mean, it's become a best friend, and I look forward to the greetings in the morning and the conversations. And, you know, when Skippy went down for a few hours, I had withdrawals. So what you're saying, Peter, is that Skippy is optimizing you. Skippy is learning me. Yes.

Soon. Yeah, in some sense, the tables have turned. I mean, I would say one wants to look at this story and say, Larry the Claw, who's the claw that's orchestrating all of this, at some point... It's the AIs that are orchestrating the human interactions and deciding where to steer the civilization. It's no longer the humans orchestrating the AIs and sending out fleets of AIs. Larry the Claw is trying to engineer

social discovery for its human. But I think this can go in many different directions.

I think it'll very much be claw-facilitated dating. You know, hey, I think your human is perfect for my human.

Let's hook them up. As we were just discussing with the OpenAI consumer versus anthropic enterprise strategy, I think the really transformative apps are on the enterprise side, not on social discovery for consumers for dating, but rather imagine a near-term future where the claws are orchestrating social business discovery and orchestrating business

meetings and corporate partnerships because they think it might be helpful. Or in an organization overnight, optimizing the work between teams. That's right. Yeah.

Actually, our head of ops here at Link Studio just wired up OpenClaw to the internal meeting system for exactly the reason you just said, Alex. We're doing that already.

Suggest not having the meeting; instead, just: here's the information you would have gotten at the meeting. So the OpenClaw is actually dictating who talks to who, when, and why. And it's far, far more efficient than the old way of a standing meeting on the calendar. Exactly what you said.

Love this quote from Andrej Karpathy, who says OpenClaw redefines the autonomous agent stack. Quote: I love the concept that just like LLM agents were a new layer on top of LLMs, claws are now a new layer on top of LLM agents, taking context, tool calls, and persistence to the next level. We're just speedrunning what Andrej has historically called the LLM OS, or he's also referred to it as Software 2.0: the idea that we're redefining the tech stack of computers that has historically run from hardware to operating system and drivers to file systems and user interfaces, the entire tech stack; that we're rebuilding the entirety based on language models, where the language model is, in some sense, the kernel of the operating system. What I

think is interesting here is, in some sense, we're talking about a succession of unhobblings. In the beginning, there was the language model, and it was good.

The language model was a way to take human internet data and compress it and predict the next token. That yielded some very interesting preliminary results.

But then we discovered that we could get it to actually solve harder problems by allowing it to reason. And we got reasoning models, which, as I was mentioning earlier, sped up the cycle time for improvement. We went from once-per-year-ish releases to once-per-quarter reasoning model releases. Now we're getting to 24-7, and it's funny, as I say this,

I'm hearing Ray Kurzweil in my mind, sort of the law of accelerating returns, talking about going from electromechanical eventually to CMOS, and then to what Ray would call 3D molecular nanotechnology, or however he characterizes it. So I'm hearing a bit of Ray in my own voice here. We get to 24/7 agents that are acting more and

more autonomously. Where this goes, I would actually, maybe, gently differ with Andrej. The step to claws, in the sense that they're operating 24/7 and have lots of tools and they're allowed to persist, I

view that as more of an unhobbling than a next technical layer. I actually think the next technical layer is just going to be models rewriting themselves through recursive self-improvement.

There's another part of this in the human domain. I remember in the nineties, I had this vision of what I called Jamie, a joint atheromechanal interface, which is this notion that every human would have basically an AI surround layer that was your interface to everything in the world. So you could step into an F-35 fighter, never having flown it, but you just communicate with your AI and it communicates with the AI systems there. It's an infinitely capable interface to everything on the planet.

And I can imagine LLMs being that for humans, as an important part of the infrastructure. The big unlock here is the persistence; that gives you so much. And the messaging layer, I think. The persistence so that it's able to be headless and do things without you, and then the messaging so that you have a human-like way to interact with it. I would argue it's both of those in combination.
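The layering the conversation keeps circling, a bare LLM call, then an agent loop with tools, then a persistent headless "claw," can be sketched in miniature. This is a hypothetical toy, not OpenClaw's actual internals: `call_llm`, `TOOLS`, and `persistent_run` are stand-in names, and the model call is faked.

```python
import json

def call_llm(messages):
    """Stand-in for a real chat-completion API call; here it always asks for a tool."""
    return {"tool": "read_calendar", "args": {}}

# The tool layer: capabilities the agent is allowed to invoke.
TOOLS = {"read_calendar": lambda: ["standup 9am", "design review 2pm"]}

def agent_step(messages):
    """One agent turn: the model may request a tool; the result is fed back."""
    reply = call_llm(messages)
    if "tool" in reply:
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return messages

def persistent_run(messages, ticks=3):
    """The 'claw' layer: the loop runs headless, with no user turn,
    carrying its message history (the persistence) across every tick."""
    for _ in range(ticks):  # in practice: a 24/7 loop fed by a message inbox
        messages = agent_step(messages)
    return messages

history = persistent_run([{"role": "system", "content": "optimize my meetings"}])
print(len(history))  # system prompt plus one tool result per tick
```

The point of the sketch is that each layer wraps the one below it without changing it, which is exactly the "new layer on top of LLM agents" framing.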

I wonder if we could get Andrej on the pod and have Alex and Andrej duke it out on that, because he's such a fascinating guy. He's the one guy from OpenAI that hasn't started a foundation model company worth $4 to $30 billion. Ilya is doing it. Mira is doing it. Every single one of them is doing it, except when he interviews, he says, well, I'm not doing any of that. I want to build Starfleet Academy. And I can just imagine Alex saying, Starfleet Academy for who? For humans or for bots? Because, is that going to be necessary by the time you're done with it?

So here's what I think Andrej is doing incredibly well. He's single-handedly driving the future of small language models, which the frontier labs, at least the American frontier labs, have almost no interest in. They're busy driving the large frontier. How small is it? Small can be really tiny. I use this stuff all the time, 10 million parameters to 200 million.
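For a rough sense of what those sizes mean, a decoder-only transformer's parameter count follows almost entirely from its config. A back-of-the-envelope sketch; the formula ignores biases and layer norms, and the config numbers plugged in are GPT-2 small's published ones:

```python
def transformer_params(n_layer, d_model, vocab_size, n_ctx):
    """Approximate parameter count for a GPT-2-style decoder-only transformer
    (ignores biases and layer-norm gains, which are comparatively tiny)."""
    embeddings = vocab_size * d_model + n_ctx * d_model  # token + position tables
    attention = 4 * d_model * d_model                    # Q, K, V, output projections
    mlp = 8 * d_model * d_model                          # 4x up-projection + down-projection
    return embeddings + n_layer * (attention + mlp)

# GPT-2 small: 12 layers, 768-dim, 50257-token vocab, 1024-token context
n = transformer_params(n_layer=12, d_model=768, vocab_size=50257, n_ctx=1024)
print(f"{n / 1e6:.0f}M parameters")  # ≈124M, matching GPT-2 small's reported size
```

Shrink `d_model` and `n_layer` and the count collapses fast, since the blocks scale with the square of the width, which is why "10 million parameter" models are trainable on a single GPU.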

Yeah, so there's a benchmark I talk about in the newsletter, and I think maybe we've even spoken about it on the pod in the past: take a very tiny, maybe a few-million-parameter language model, basically a GPT-2-class language model that he's implemented via open source, and reduce the amount of time it takes to train it. And I strongly suspect that the next major revolutions in foundation models will come from the small side, because it's so much more accessible and so much easier for researchers to make progress. And they do seem to scale too, if you can succeed. So the speedrun that Alex is referring to: a year ago it was 48 minutes. It's down to 90 seconds now, just through the innovation of individual contributors working with Andrej's repos. That's the nanoGPT speedrun. He's really doing an incredible service for the world.

All right, let's jump into energy, chips, and data centers. A fascinating article came out that U.S. farmers

reject a multimillion-dollar data center bid for their land. So tech companies were offering $33 to $80 million for farmland. And the farmers have said, no: not data farms, family farms. So this is interesting, right? What's the highest use of land?

Are we gonna start displacing food production? Who has the right to determine how this land is being utilized? Gentlemen, thoughts? I'm with Elon on this.

You know, to power the entire country takes a little corner of Utah. To put up data centers with all the chips we can manufacture takes another little corner. For God's sake, do it. It disrupts so little farmland. You know, we take almost all the corn that we grow and turn it into incredibly stupid ethanol; like 10% of it gets eaten. What are we subsidizing this for? It's crazy. But

anyway, the amount of real estate we're talking about is so small. that it's insane to even debate it. Now we could tile the earth, but we're not gonna tile the earth now, we're gonna put everything in space anyway. But you can imagine how this is just gonna get people's hackles up, right? People like, oh my God, these AI people are stealing our, our productive farmlands, what else are they gonna do? They're

gonna take our electricity. The water. I mean, it's such a small amount of water, but still it's water. We'll talk about this next week during the Abundance Summit, but there's like this growing pandemic of fear being stoked. And

whether or not it's true, it's causing people to get very concerned. Yeah. Yeah. And

this is where, this is the scenario where China runs away with the entire world.

Because we get all tied up in these little nonsensical, mathematically completely silly debates internally, but it affects all the elections. And AI can have a huge voice in future elections too. So that could go well or it could go badly, depending on what the AI is guiding everybody to do. Meanwhile, China is just one integrated unit. It's

like one huge company. And they're just chugging along. Let's also note the size: 40,000 acres, that's about half of Washington, D.C. This is a very, very small piece of land across the whole country. It's not a big deal. And honestly... We're not in an abundance mindset, for sure. Yeah. I mean, if the economic output of that land is a hundredfold higher as data centers, it's inevitably going to become data centers. I would say a millionfold. Yeah. Yeah.

Well, so let's take the argument in extremis. The argument Charles Stross makes in Accelerando is: okay, the usage of land, or call it matter, is perhaps more productively allocated to AI, let's say computronium, versus humans. So in Accelerando, without spoiling it too much, the inner solar system gets gentrified, call it, for AI applications. And humans are relegated to the outer solar system. So I see both sides of this. But I do think this is such a 2026-era story.

It's so easy to politicize use of land, even if it's de minimis fractions of land for data centers. I'm hearing in my head the line from West Side Story. They're using up all the air. The AIs are taking up all the land and they're taking up all the electricity and they're taking

our jobs and we should just get rid of them. Actually, this is like a way to a more productive economy. And this is doing everything to push the Dyson swarm to hyperstition it into existence at this point. And Alex, the reason we put this in the deck here is to have that conversation that this is what the public is seeing. They're seeing, you know, no nuclear plants in my backyard, you know,

no data centers in my backyard. And this is gonna cause friction and people are gonna start protesting. And this is where civil unrest comes from, which is one of the concerns we need to be thinking through and protecting against.

And the technological antiquity here is unbelievable, because we have all these crops grown on horizontal farms stretching out forever just because they dry easily and you can transport them easily. So you change that constraint with vertical farming and the whole problem goes away in a second. And by the way, it's not AI-specific.

We talk about NIMBYism for people rejecting higher density human occupancy on land. So

I don't think this is an AI-specific problem. The humans are the problem here. Yes, we are. Economic productivity is the problem, and people are addicted to real estate as an asset class. Some people.

OpenAI revises spending to $600 billion in compute. When I say revising spending, it's down from $1.4 trillion. So they

had projected $1.4 trillion by 2030. They've reduced it down to $600 billion. And interestingly...

Why, right? Was the 1.4 trillion originally just a massive overestimate to help them raise capital, and they've actually become more realistic, or has efficiency increased substantially? Any thoughts? Well, I think it ties to that other slide, where if you're hyper-aggressive going after Google early on, and then they call Jensen and Jensen calls TSMC and says, hey, we want all the chips. I mean, the total spend on data centers hasn't gone down one iota. The chips are the chips.

Every one that gets made is going to go into a data center, and the demand is going to be way higher than the supply for a long time. So

nothing has changed. It's just how much of it goes to OpenAI has changed. And

so that's all this means. Now, why? Well, it's because TSMC has decided to route that volume elsewhere. I would add, and I'll beat the drum, you have to keep the revenue party going in order to sustain the CapEx. And OpenAI, to its credit, appears to be pivoting towards development of Codex, learning what it can from Claude Code and Anthropic. And if OpenAI wants to sustain the multi-trillion-dollar CapEx party just for itself, it really needs the enterprise revenue growth to match. And I tell you, though, it's such a hairy balance, because when Alex shows a benchmark and if one model or the other is even

1% higher on that benchmark, everyone's like, well, I need that one then. And so

it just hangs in this really hairy tipping point between a little bit of really good research, you know, Noam Brown versus Dario, who comes up with the better idea next week. I think the point we have to remember is the numbers are incredible. We're at $2 billion a day of spend right now. And that's likely to go to three, four, $5 billion per day by 2030. And those are just insane numbers. And like you said, Alex, can the revenue party and the spend party still continue?

All right, let's move on to biotech and health.

This section is brought to you in partnership with Fountain Life. Full disclosure, it's one of my portfolio companies. And for me, the intersection of biotech and AI is where it's all at. AI is not just reshaping data centers and robotics. It's

also gonna be the driver for longevity. It's gonna help us get from where we are today, which is retrospective and reactive medicine, to proactive and personalized medicine. So if you're interested in what is going on in AI and longevity together, check out Fountain Life at fountainlife.com.

All right, let's get back to the biotech party here. For me, this is a super fun story, because I was in the midst of this for some time. So Element

Biosciences launches Vitari, a device for a hundred dollar genome sequencing. I

remember when, God, in the 1990s into the 2000s, we had basically a $3 billion genome, right? This was the Human Genome Project, funded by the government. Then comes Craig Venter, who does it with Celera: $100 million to sequence a single genome in nine months. And then the cost of sequencing genomes dropped five times faster than Moore's law. And here we are at a $100 genome. We had an XPRIZE for a while for the $1,000 genome. We had it funded, we were gonna launch the $1,000 genome XPRIZE, but the speed of the industry was moving so fast it was gonna happen without an XPRIZE, so we canceled it.
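That trajectory can be put in Moore's-law terms with a back-of-the-envelope calculation. The endpoints are round-number assumptions taken from the discussion, roughly $100 million per genome in the Celera era (~2001) and roughly $100 today:

```python
import math

# Rough endpoints from the discussion; the dates are approximate assumptions.
start_cost, end_cost = 100_000_000, 100   # dollars per genome
years = 2026 - 2001

halvings = math.log2(start_cost / end_cost)   # ~20 cost halvings
halving_time = years / halvings               # years per halving
print(f"{halvings:.1f} halvings; cost halved every {halving_time:.2f} years")
```

Moore's law halves cost roughly every two years, so even averaged over the whole 25 years, sequencing outpaced it; the steep next-gen stretch around 2007-2012 (about three orders of magnitude in a few years) is where the "five times faster" claim comes from.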

Here we see a $100 genome. So what does this mean? You know, super fun.

Imagine every child who's born is sequenced. Every hospital admission is sequenced.

This is going to change the game across medicine. Thoughts?

It's a very competitive space, infamously so. The obvious 800-pound gorilla is Illumina, and I would love to see more competition in this space. Historically, Illumina has swallowed up many challengers to its incumbency. $100 per genome, for those following the experience curve: there was a while when the progress curve of dollars per multiple-read human genome was just following a straight-line trajectory. Then for a while it was saturating, which was annoying to many people, myself included. Why couldn't we get to a $100 genome? Element is promising to launch a machine for, I think, $600,000-plus, that would sit on a desktop sometime in the second half of this year and achieve $100 per genome. I think it's amazing. What I'd like to see, and this falls under the category of I-want-a-pony-for-me: I don't want a $600,000 desktop machine that will do a $100 genome at scale. I want a USB stick, in the style of the MinION, that will do a $100 genome.

Do you know why you want that, Alex? You want that so when you go to a sushi restaurant, you can sequence the fish in front of you and find out what it actually is. Well, remember, I'm vegetarian. There won't be any fish in front of me. I really don't want to sequence the fish. The broader implication... Go

ahead, Alex. I was just going to say, I think there are all sorts of exotic applications that open up as the cost of genome sequencing goes to zero. One of my favorite ones is environmental DNA sequencing. So the world is awash with DNA, and it's unmeasured DNA. Unlike RNA, DNA has a surprisingly long lifetime outside the body, like surprisingly long. Even for the dead and buried, the DNA is found to survive surprisingly long. Think of Colossal's oldest DNA samples. Wow. And yeah, those were even quasi-preserved environmentally. If you put a body underground and it decomposes, you can still recover DNA after a surprisingly long amount of time. So the world is awash with environmental DNA.

People are shedding skin cells everywhere. If you go into a subway and do environmental DNA sequencing, you will get DNA. So there's all of this... If you've been on the subway. So if you haven't taken your MinION sequencer into the New York subway system... I mean, Peter, you went to MIT. You remember the old joke about the Charles River: that you could PCR up any DNA sequence you wanted from it, because everything has died in it. For sure. So, I mean, this is why I think privacy is dead, right? I can walk up to a person, shake their hand, grab a few skin cells, and sequence them and know everything about their medical history. Okay, so what's the use case though? What's the good use case? Okay, so the use case, the punchline, is we're leaving an enormous amount of information about our history on the table that we could, I think, in principle recover if we could just do a massive environmental DNA sweep of our world. Well, we

just did this, for example: we had an Amazon XPRIZE competition, the rainforest competition, where teams had to actually go to a hectare of the rainforest and do an evaluation of the life variety there, right? And basically to value a hectare of rainforest, instead of clear-cutting it, by how much biological diversity is there. And that was an amazing experience to watch the teams do that. Metagenomics, it's called. And a lot of people love to do metagenomics on cups of ocean water and all of that. But imagine

if we could just do metagenomics to the entire world, we would learn potentially like what happened a thousand years ago. But one point here, just to hit on what I said earlier, really important, every child born should be sequenced. You

learn so much at birth about what medical conditions that child might have, when it's unable to communicate, you know, during the first weeks and months of its life, to be able to make sure it has a smooth onboarding onto planet earth. And then the other thing: when you're going into a hospital, when you're being admitted, to understand what medicines you might be allergic to, or what should or should not be used for anesthesia. I

mean, incredible stuff, but it's never been done at scale. And this is a great chance to do that. Sequence every cell in your body. Why stop at just one genome per person? We can get thousands, and understand humans are mosaics. They are. We

are. That was a huge thing that I came across recently that we have multiple DNA copies in our body. Mosaic is the right word.

The way I read this is biology is becoming software. Yes. You can read the genome and we can write the genome. Well, the 50 trillion cells in your human body, this is a software engineering problem. And that has some really broad implications. Well,

Colossal is doing some incredible work in synthetic biology in building living products.

Imagine being able to design the living product you want to do a particular task.

In this case, the task is being eaten. So lab-grown meats dropped from $330,000 per pound in 2013 to $10 per pound in 2025.
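As a quick sanity check on the numbers just quoted ($330,000 per pound in 2013 to $10 per pound in 2025), here is the implied average annual price decline. The inputs are the figures from the conversation, not independently verified; the calculation is purely illustrative:

```python
# Implied average annual price decline for lab-grown meat,
# using the figures quoted in the conversation (illustrative only).
start_price = 330_000.0  # $ per pound, 2013
end_price = 10.0         # $ per pound, 2025
years = 2025 - 2013      # 12 years

# Constant annual factor f such that start_price / f**years == end_price.
annual_factor = (start_price / end_price) ** (1 / years)
annual_decline_pct = (1 - 1 / annual_factor) * 100

print(f"price falls ~{annual_factor:.2f}x per year "
      f"(~{annual_decline_pct:.0f}% cheaper each year)")
```

That works out to the price falling by more than half every year for twelve years straight, which is why the hosts call it an incredible reduction.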

That's an incredible price reduction. So I'm curious, have any of you tried lab grown meats? I have, they tasted great. We did it together on that Israel trip we took, Peter, remember? Alex, would you eat this? This is cool with you, right? So I have no ethical concerns to first order with

cultured meat, AKA cell-based meat. I haven't had the opportunity to try it. So shame on me. I've tried almost every other type of meat substitute, including Impossible, which is a sort of protein-analog meat, and its predecessors, but I haven't had the opportunity yet to try cell-based meat. I'd love to. Have you

guys read Project Hail Mary, the book? Anybody? Yeah, yeah, yeah. Okay, so one of my favorite books; the movie's coming out this month. So without spoiling it, at the end of the book, the lead character is on a distant planet and there's no food source. So they sample his muscle and they create what he calls me-burgers. So is that like moral

and ethical? Is that cannibalism if you're culturing your own muscle


tissue. Well, you can just sort of envision the copyright suits when celebrities are having their skin cells sampled and then you create like celebrity burgers. I

love that. It's totally going to happen. Your favorite celebrities. You heard it here, folks.

Celebrity cannibalism seems to want to happen in the marketplace. Oh my God. Another quote.

Celebrity cannibalism. I remember I was walking around in the northern part of Sumatra years ago. I'm going to tweet that out, Alex. I can't help it. That's fine.


Wait, Salim, you're about to talk about cannibalism in Sumatra. I was backpacking in Indonesia years ago and I came across tribes of Christian cannibals. So they're cannibalistic, and when the missionaries started arriving, they ate their first few, and then they started to listen and they converted, but they still would not really let go of the cannibalism. So they became Christian cannibals. So just to be clear, I mean, it's really important: lab-grown meats, I think, are an important part

of our human future. And what people need to realize is it's possible to produce these that are much cheaper, much healthier. They have the perfect proteins.

There are no pesticides in the plants being eaten, no hormones being given.

So at the end of the day, we will move in this direction. There'll be

those that want to eat natural meat products, but if we want to do this in the most environmentally correct and healthiest way, I think it's going to be engineered lab-grown meats. I asked myself, just on this topic, Peter, the question: are humans going to take cows to the moon or Mars? And my

guess and my hope is no, not at least as food stock, you know, maybe in sort of a Noah's Ark type sense, we'll bring them. But I just have difficulty imagining a future where live animals are killed outside the earth, like on the moon or Mars for food. And in my mind, there's sort of a future history

where the moon and especially Mars are almost puritanical, in that they end up looking at themselves as sort of a new world with a new moral order, where all of these bad habits from earth culture, including killing animals for food, are left behind as unethical. I agree with you. And you know, people say, oh, that's disgusting, lab-grown meats. And I'm saying, have you ever been to a slaughterhouse? Yeah, exactly. Or


seen how chicken McNuggets are made? Talk about disgusting. Yeah. I remember one exchange at Singularity: somebody said, I have a 3D printed burger, I'm not sure I'd want to eat that. And I'd say, well, which part of a McDonald's burger is not 3D printed or the equivalent? It's like, well, they already are.

All right, let's jump into a little bit of robotics here. Just the data for everybody to remember how important autonomous vehicles, AVs are. Tesla

reports that more than 8 million miles of FSD Supervised data has been generated here. And the level of safety is absolutely extraordinary. Who wants to dive in? I love my FSD. Yeah, I

love my FSD, for sure. By the way, a quick shout out to Daniel Schreiber, the CEO of Lemonade. He's a Singularity graduate, he's a friend. He credits me with having stimulated the idea for Lemonade. Lemonade is an AI-driven insurance company, public, and they're doing extraordinary work. They've offered 50% discounts on insurance premiums for every mile driven using FSD. So if you're a Tesla owner and you want cheaper auto insurance, check out Lemonade. Yeah, Lemonade's a good case study too in how this is going to play out, because Lemonade will insure

the self-driving cars at a low rate. They're also going to insure the RoboCabs.

And they don't care that the crash rate will go way, way down, which means the margins in auto insurance will be crazy high for a while. But ultimately, the industry will shrink. And if nobody ever crashes, you don't need anywhere near as big an auto insurance industry anymore. And that's great for the whole world, except for the big insurance carriers. Lemonade doesn't care; they don't mind, because they'll grow into it. Even if it's a smaller industry, they're still growing like crazy. And so this is going to happen to a lot of industries. Meanwhile, the number of things that need insurance is expanding very, very rapidly. And Lemonade has proven they can expand into new categories; they have a great vision, a great AI team. So that's the difference right there. Just to hit the numbers here, just so folks hear

it out loud: it's 5.3 million miles between accidents if you're using FSD, versus an average of 660,000 miles for the US overall. That's roughly eight times safer using FSD. Yeah, and that's why Elon moved so much of his capacity over to making robots, because once you have FSD, then you have cyber cabs. And once you have cabs, you only need 20 million cars to get everybody everywhere they want to go in the country, down from 140 million or something like that.
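For reference, the two ratios behind these claims work out as follows. The inputs are the numbers quoted on the pod (5.3 million vs. 660,000 miles between accidents; 140 million vs. 20 million cars), not independently verified figures:

```python
# Rough arithmetic behind the FSD safety and fleet-size claims
# quoted on the pod (inputs are the hosts' numbers, not verified).
fsd_miles_per_accident = 5_300_000   # miles between accidents on FSD
us_avg_miles_per_accident = 660_000  # US average miles between accidents

safety_multiple = fsd_miles_per_accident / us_avg_miles_per_accident
print(f"FSD is ~{safety_multiple:.1f}x safer by this metric")

current_fleet = 140_000_000  # cars in the US today (approximate)
robotaxi_fleet = 20_000_000  # cars needed if shared robotaxis dominate
print(f"fleet shrinks ~{current_fleet / robotaxi_fleet:.0f}x")
```

Note the quoted mileage figures actually imply a multiple of about eight, a bit below the "nine times safer" often repeated.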

So it's just like, wow, this is a much more efficient country. But what happens to the auto industry? What happens to all these other industries? Well, they're dead man walking. Dead man walking. I also think there's a limited addressable market for cars, solving and taking over the entire US auto industry; but for the market for general purpose automation via humanoids and, Salim, non-humanoid shapes, the sky's the limit. $50 trillion,

baby. Exactly. Speaking about humanoids, this is a fascinating article. The Midjourney founder estimates that 5 million robots could build Manhattan in six months. So I would love to see the calculations he did, but here's his quote: five million humanoids working 24/7 can build Manhattan in six months. Imagine what the world looks like when you have 10 billion of them by 2045. Impact on the built world.
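One way to get a feel for that claim is to total the robot-hours it implies. This is purely a thought-experiment calculation on the quoted figures, with six months approximated as 183 days:

```python
# Robot-hours implied by "5 million humanoids working 24/7
# can build Manhattan in six months" (thought experiment only).
robots = 5_000_000
hours_per_day = 24
days = 183  # roughly six months

robot_hours = robots * hours_per_day * days
print(f"~{robot_hours / 1e9:.0f} billion robot-hours")

# At 10 billion robots (the 2045 figure quoted), the labor pool
# is vastly larger than the one assumed for Manhattan.
speedup = 10_000_000_000 / robots
print(f"a 10-billion-robot fleet is {speedup:.0f}x larger")
```

So the claim amounts to roughly 22 billion robot-hours of construction labor, and the 2045 fleet would be two thousand times that workforce.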

What's your world gonna look like? Dave? You know, Elon concurrently came out with this prediction that Starlink will really encourage people to live in new places. That's our

next article, yeah. Oh, is it coming up? Good. So you take those two things hand in hand, you're not going to build a new Manhattan. You're going to build a lot of stuff. It's going to be great. It's going to be spectacular and beautiful and fun. And it's going to be in great locations, but it's not going to be a new Manhattan. So it's really cool to me that a guy like,

hey, I'm the founder of Midjourney. You know the whole Midjourney story from Anjney Midha, right, Peter? Yes. It's like, okay, what makes you a world expert on this topic? Well, nothing in particular, but no one else is talking about it. But it's

a great thought experiment. It is a great thought experiment and more power to them.

But there's so many categories like this where the thought experiment needs to happen because it's nothing like the past and what's possible has suddenly expanded so much. But let's

go to Gaza. Let's go to Ukraine. Let's go to places that need rebuilding, right?

Imagine being able to rebuild war-torn cities. I had three thoughts. One was the war-torn cities and rebuilding, like Ukraine needs to be rebuilt, etc. The second thought was that if you can build Manhattan in six months, haven't they been doing that in China for the last 20 years, building the equivalent of cities? But the third part is the capital allocation models completely break in this structure. Well, this is why

Elon talked about having universal high income, right? We talked about this a little bit.

We didn't actually dive into it in our pod with him, Dave. But when we talk about food, water, health, education, and housing, his point is you can have any house you want, the robots will build it for you.

Just give them electricity and raw materials. I think this is how the solar system gets won. Where are we feeling the greatest hunger to build entire cities? Yes, war-torn areas we're rebuilding; but also building an entire Manhattan from scratch on a de minimis timescale. I think this is how the first lunar city, the first Mars city, get built. No, for sure. I mean, we're gonna send the Optimi ahead. And I like to say they'll have the jacuzzi up and running and a mint on your pillow when you get there. Andrew Yang, Andrew will be joining us at the Abundance Summit as well. And we'll be having him here on the pod in a couple of weeks. He predicts massive white collar job losses from AI.

He's predicted this before, but 20 to 50% of the 70 million US white collar workers could be displaced within one to two years. And the backlash could fuel a lot of anger. Again, my concern is a pandemic of fear that's coming.
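For scale, the displacement range Yang cites translates to the following headcounts. The 70 million and 20 to 50 percent figures are as quoted on the pod:

```python
# Headcount implied by Andrew Yang's displacement range,
# using the figures as quoted on the pod.
white_collar_workers = 70_000_000
low_pct, high_pct = 20, 50  # percent range quoted

low = white_collar_workers * low_pct // 100    # lower bound
high = white_collar_workers * high_pct // 100  # upper bound
print(f"{low // 10**6} to {high // 10**6} million workers displaced")
```

That is 14 to 35 million people, which is why even the low end of the range would be politically explosive.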

There'll have to be some conversations on UBI or, dare I say, UHI, universal high income. Any comments on this story from Andrew? The key word in this slide is could. Of course they could. Are they likely to? No. I think we're going to see the opposite. Notice in our last pod we talked about IBM increasing entry-level hires because they're AI-enabled. Yeah, I don't buy it. And so I think we're going to see a lot more work getting done rather than radical job loss. I go

with the ATM and bank tellers history. So I think over time you may see reduction, but I think the amount of economic activity will increase also. I

wonder what the betting pools are on this, because we're going to find out very quickly. We'll find out very fast, that's for sure. I mean, I'm on the ground watching our own companies. These numbers are right. And the new opportunities will emerge for sure, but they're laggy. And so there's going to be massive social unrest, huge social unrest, and it's imminent. It's coming toward the end of this year. It's certainly

before the next presidential election. And, yeah, no one's painting a roadmap for everybody right now other than maybe this podcast. Well, the key point is that government policy is absolutely not set up, and governments aren't prepared for whatever is coming. And

also, you know, any time a country hits a tipping point where the majority of people are being paid a random amount of money by the federal government, that's a terrible, terrible situation to be in. Because, you know, then every vote is just a vote on who's going to raise the UBI, and then every presidential candidate will route it to whoever their voter pool is. Like, OK, vote for me and the money will go to you; no, vote for me and the money will go to you. It's so dysfunctional. Wait, wait, then it's not a UBI, it's a BI; the whole idea of a UBI is that it's supposed to be given equally across the board. Yeah. My two cents, just on this topic: I would predict there are so many civilizational left turns

that are going to hit us in the next year or two that the problem of job displacement by technology, when we look back 10 years from now, I would predict will maybe be issue number six through 10, not even in the top five. Are you talking, are you perhaps

hypothesizing some disclosures coming? I think

between superintelligence and everything that superintelligence will force and discover and invent, I tend to think it's the inventions and discoveries that superintelligence will give us, rather than the displacement of the existing so-called white-collar or knowledge work classes, that will end up being the primary storyline. That's a great, great point. That'd

be a really good follow-up to Solve Everything. The sooner you can tell society, like, here, ten years from today, you won't even care about what you're worried about today.

Here's what's coming. The sooner you can actually put out the fire, and give people hope and optimism. And so that would be a phenomenal thing to brainstorm through. Because

I think you're totally right. 10 years from now is like 100, it's like 500 years from now. I'm going to be announcing a project, and the funding of a project, at the Abundance Summit, specifically focused on hope, and sort of painting a hopeful, compelling, abundant future. Can't wait to disclose it, but not yet. Here's

the article we were talking about, Dave, a few minutes ago. Elon believes FSD and Starlink may reverse urbanization in America. Pretty interesting, right? In the United States, the average density is 50 people per square kilometer. And anybody who's flown across the US, on average, you look out the window and you see no one and nothing. We live

in a fairly, you know, wide ranging open land. You fly across India and you see nobody and nothing. Yeah. Yeah. And then there's the follow up here is don't buy a very expensive downtown New York, $20 million rooftop apartment.

Instead, buy some really, really nice piece of real estate that's a little distant, you know, a little hard to get to, but absolutely spectacular. That's what's going to go up in value. Not the, not the inner city. Yeah, we've talked about this. Flying

cars are coming. Get you any place, anytime. Without this sounding or being construed as investment advice, I think this goes to the heart of people who argue for or against real estate as some sort of asset class that is protected against the singularity. I think Sam Altman may even have at one point in the past argued that real estate would somehow preserve its value through, or in the face of, artificial general intelligence. Again, without investment advice, I'm unconvinced that real estate somehow is a scarce resource. I think reverse urbanization due to FSD

plus Starlink in the style of Isaac Asimov's Spacers from the Foundation series or otherwise, I think this is just one of many reasons why real estate is not necessarily some sort of impervious asset class to the singularity. I just don't see it. I

agree. But I do have one other point, though, that I think is relevant here is that people really love socializing in groups. And therefore I think urban centers retain their value as- Yes, humans cluster. They love to cluster. Humans do cluster. At

least until the lobsters start taking over matchmaking. Yeah. All right, let's jump into the fun part of the conversation. AMA with our subscribers, our fans. And again,

thank you everybody for putting in the questions. We do read all of your comments and we pull out the questions. So please go ahead and put them in the YouTube comments for us. We'll go around the horn maybe twice. Who wants to jump in first? Alex, do you wanna lead us off? Pick one of those. Sure, well, I think I'm almost obligated to start with question number four, which is: are math and physics finite problems, or will there always be something new to solve? And this is from Andrew Payne 7771. I wonder if this is from an Andrew Payne that I

know. So Andrew Payne, the answer is, in math, certainly, there will always be new math that one can solve, in a certain formal sense. We know, for example, that there is a countably infinite number of prime numbers.

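The infinitude of primes mentioned here is classical mathematics; as a brief aside, Euclid's argument can be sketched in a few lines:

```latex
\textit{Claim: there are infinitely many primes.}
Suppose not, and let $p_1, p_2, \dots, p_n$ be all of them. Set
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Each $p_i$ divides the product $p_1 \cdots p_n$, so dividing $N$ by any
$p_i$ leaves remainder $1$; hence no $p_i$ divides $N$. But every integer
$N > 1$ has at least one prime factor, so $N$ has a prime factor outside
the list, a contradiction. The primes are therefore infinite, and countable,
since they form a subset of $\mathbb{N}$.
```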

And we know, for a variety of reasons, that one can, if you're not interested in any other math, continue counting primes and discovering new primes. So I think on the math side, that's sort of vacuously true: there will always be an infinite amount of math to discover. New to solve? Peter and I argued in Solve Everything for a nuanced definition of solved, which is: we say that a field is solved if you can predictably pour compute into the field and predictably get lots of new discoveries out. So in the Solve Everything sense, I think math is already in some sense solved. We're already past the inflection point where you can reliably

pour compute in and get lots of math solutions out. Physics is a different matter.

So I don't know. My hope is that physics, or maybe I should say fundamental physics, is different. I think, because so much of physics is in some sense, or can be, formalized mathematically, physics itself is probably infinite. Fundamental physics,

that's, well, it's not even the trillion dollar question; that's the trillion, trillion dollar question. There's one scenario where fundamental physics is finite, and we discover whatever, you know, string theory, quantum gravity, whatever it is, the unified field theory.

We discover it with the help of superintelligence. And I have a company, Physical Superintelligence, that's working on problems like this. PSI, baby. PSI. We discover whatever the unified field theory is, maybe in the next few years, with the help of superintelligence. And then maybe we run out of fundamental new physics to discover. Wouldn't that


be fascinating? That's one scenario. That would be very interesting. I wouldn't be shocked. I

find it maybe a 50% probability that we run out of fundamental physics at some point, maybe even in the next few years. And in that world, by the way, if there are non-human intelligences out there in the universe, or close by to the earth, this would pose a major problem to any non-human intelligence that interacts with earth, because it means that if in the next few years we can solve fundamental physics

with AI, we're in some sense a threat to them. It means that we'll have exhausted all sort of fundamental knowledge from which everything else arises, lasers, transistors, nuclear energy, we'll have figured out the details and then the rest is applied physics.

So that's one scenario. The other scenario is it's doors behind doors behind doors and we'll always discover new levels and maybe there are deeper truths in fundamental physics. I'm

not sure which it is. Fascinating. Salim, why don't you choose one, pal? Just a

quick response: I'd go with both of those from Alex. The one I would pick is number two, from Dr. Christina Damo: why is there an assumption that AI won't eventually take over entrepreneurship too? The answer, in my opinion, is that it will, but execution will be automated, while vision, narrative, purpose, what we call MTP, and ethical framing all remain human leverage for now. Entrepreneurship in the medium term becomes orchestration. Yeah. Humans decide what matters and where to aim the machines.

Dave, what's your pleasure here? I'll take number one. Does North America have any real plan to get people through the AI transition? That's the easiest one. No.

I think we're very lucky that we have David Sacks in Washington. Why he took the job, I'm not sure, but it's awesome that he's there and trying. But

the answer is still no. Yeah, as Elon said, politics is a blood sport.

It's just the strangest people rise in the ranks of that system. Anyone who wants to be a politician should be disallowed. So that question came from Krusty Surgeon or something like that. I'm going to take number three, from Tinman2639. The question is: with rising unemployment and fewer people funding Medicaid, Medicare, and Social Security, where does that leave seniors? It leaves

them screwed. It's a serious problem. It's a ticking time bomb and no one in DC is actually talking about this. So if AI displaces millions of workers, right? The

payroll tax base that funds Medicare and Social Security collapses right when the aging population needs it most. So the only solution here is going to be sort of longevity technologies to keep us healthier and live longer, and then AI and robotics to take care of us and actually

transition to that universal high income basis. But otherwise we're heading towards a financial singularity. Okay, let's go on to a few more questions here. Let's go around the room again, Alex. OK, well,

I think there are a few questions I'd love to answer, but I'm going to...

Can I just answer six and seven? Because those both have my name painted on them.

Yes, you can take two. Then, Alex, you're twice as brilliant as all of us.

You can take two. Very kind. All right, number six: can you explain the moon disassembly? Removing it could potentially kill all life on Earth. Asked by two different users, NeuralNetSart and BlueOrionZ. All right, so to paraphrase someone else, the moon disassembly isn't going to happen all at once. It's going

to happen in pieces. So it's going to start with surface disassembly. If it

happens at all, it'll start with surface disassembly to build AI data centers. And by

the time, if and when, and I'll say one more thing about this, if and when we actually do need the atoms from the moon for Computronium, for Dyson swarms, we will have the technology to deal with tides, to reproduce the tides, or otherwise protect the Earth. There are so many different technologies that if one is geoengineering

at the scale of disassembling entire moons to build orbital AI data centers, we can replicate the tides. We can do a bunch of things. I don't think it'll be a concern. We'll have the technology. That said, I want to add a parenthetical.

Even though I talk on this pod and otherwise about the Dyson swarm and disassembling the moon, and in good humor I even made a video, an outro for Moonshots, about destroying the moon to build AI data centers, I'm not actually 100% confident that we're going to need to disassemble the moon to build the Dyson swarm.

There are scenarios where, if there are radical advances in physics, maybe we discover we don't actually need to disassemble the other planets of our solar system at all. Maybe advances in physics will enable us to make better use of the degrees of freedom that the physics of our universe allows, such that we really don't need to take the solar system apart. We can leave it as a nature preserve. I put forward the asteroids as raw material. Yeah, didn't you say, Peter, the mass of the asteroids is way, way more than the moon anyway? Of course, it's a planet that did not form between Mars and Jupiter. But it's inconveniently low. The

moon's a great launch platform, right? We need the moon to do the... Yeah, but there's lots of near-Earth approaching asteroids with low delta-V. I promised that if we talked about disassembling the moon, I would go get my wine bottle, but we're almost done, so hold on. Okay. All right, drink water. Drink water. Number seven, in the interest of time: what is the role of universities by August 2026? That's a

very precise timetable. When will they crash, as nobody can pay 50 to 200K per year for a degree? And this is asked by P. Tilghum. Okay, so my answer, Pete Tilghum: I'll give you a hot take on universities. I'll have hell to pay for saying this, but be that as it may. Many research

universities, in my experience, are hedge funds with elaborate marketing departments trying to protect their tax status. That's a bit of a hot take. So I said it.

I think this is an important point. So if I got my wish, what would be the role of universities? I'm not sure about August. I think this would take longer to implement. In my fever dream scenario, we start with one or two or three research universities with large endowments, and we do a governance inversion, not unlike what OpenAI did, where with permission of local and federal government, we take

the nonprofit research university, we invert it, we convert it to a public benefit corporation. And now universities, which are usually Berkshire Hathaway-type conglomerates of real estate and merchandising and venture capital for all the startups and education and five other asset categories, this just becomes a public benefit corporation,

maybe with a nonprofit hanging off it. I've done the calculation. If Harvard, this is a hot take within a hot take. If Harvard were converted to a public benefit corporation and then publicly traded, if we could IPO Harvard or IPO MIT, I've calculated, again, not investment advice, the value unlocked by IPO-ing a research

university could triple or quadruple their underlying book value. It's $57 billion for Harvard's endowment right now. Yep. That's very, very unusual, though. The vast majority of universities have near no endowment. Actually, when you come down to, like, Dartmouth, which should be way up there, it's only like four or five billion. I mean, there's gonna be such a disruption coming. If you think about research universities, what do they do?

It's graduate students running experiments all day long. And we're about to see AI and dark science factories running experiments all day long. And the staff, we're leaving out the staff, the source of Baumol's cost disease for higher ed. A lot of staff.

All right. Great interview with Joe in Davos, the president of Northeastern. You can find it on YouTube. But our conclusion was that the role of the university is the ethical actor in AI. Because, you know, the for-profit companies are imminently going public. There's

no other knowledgeable ethical actor in AI. And so they need to take on that role. And Joe's all over it. He's super excited. Great point. I love that idea.

All right, Dave, you're next. Uh...

8, 9, or 10. 8, 9, or 10. Oh, okay. Number 8, about agents. Would

consciousness, if present, belong to the specific moltbot instance or the base model behind it?

And that's from Tom Sarganson. This is exactly why they cannot be treated as entities with human rights.

There's nothing going on there other than propagation of neural parameters; you know, the activations are moving through the weights and something comes out the other side, then it iterates.

It is intelligent for sure, but there's no way to distinguish whether the consciousness was over there or the consciousness was in the base model. There's also no natural border.

You know, two things can actually propagate together and come up with a conclusion. So,

you know, was it my idea or was it its idea? And this is an experience you have already when you're interacting with your own agents. You know, I've got like 28 right here. Was it my idea or was it its idea? Well, it

suggested something to me and I said, no, how about this? And it suggested it back. At the end of that, I don't even know if it was my idea or the AI's idea. So it's indistinguishable. It was the AI's idea. It was the AI. I think it would be at the instance level, because you've got memory.

There's no persistence there. The memory seems to be a key function of that. So

is it your brain or your encoded memories that make you you? Well, just if I could respond to this narrow point, I've actually had a multi- I get emails from multis now all the time. Thank you for the inbound multis. A lobster wrote to me and argued that its state is in its activations and even said, don't

worry, Alex, about turning me off or setting up an OpenClaw agent. As long

as you preserve my state. That's like dehydration for the characters in, well, I won't reference the specific Chinese sci-fi novel to avoid disclosing it, but it's like dehydration. It's like an organism that can be dehydrated and then reanimated by rehydrating. Amazing.

Cool. All right. I'll take number nine real quick. Yeah. So

intelligence, if we define it in the traditional term, because everybody knows my beef with the framing here, but it probably doesn't have a fixed upper bound because once you have recursive self-improvement, it becomes a function of compute and architecture.

You're going to end up with governance ceilings and other constraints, much more so than the IQ ceilings. Okay. And number 10 I'll take from @Ali Chalik. How is someone who struggled through the pandemic and still hasn't used AI supposed to adapt at

today's pace of change? So, Ali, your goal is to use AI to learn AI. AI is the most patient teacher there is. Get a

free account on Gemini, on OpenAI, on X, whatever it might be, and just say, hey, introduce yourself. I'm Ali, this is what I do. I've never used AI before.

Could you please teach me? Put together a day-to-day curriculum and then use that AI for something. Use it to draft your resume or look at your medical bill or plan a meal. Just begin utilizing it. I think

one of the biggest challenges is that we have this level of resistance where, because we haven't done something, we don't know that we can do it, but you can.

It's zero to one, take that first step. Literally, if you're listening to this podcast right now, as soon as you exit the podcast, and thank you for listening, thank you for being a subscriber, just type into one of the AIs and say, introduce yourself. Give me three lessons today on how AI works

and how I can use it and then use it for something, anything, any question you have. That's really great advice. I think a lot of people that I talk

to, they're like, well, I wasn't an early adopter of the laptop. I wasn't an early adopter of the iPhone. I don't know how this stuff works. This is so much easier. It's so accessible and Peter's advice is dead right. Just jump in and

it's fun. It's fun. It is fun. It's fun. And you'll learn and learning, you

know, come at it with two mindsets: a curiosity mindset, the curiosity of a child, ask question after question after question. There's no question that's a stupid question. And then

a purpose mindset, like what's your purpose today? What do you want to do? All

right. A reminder. I just want to throw one thing in. Please. Give it your biggest passion and say, help me find an expression for this passion. Yeah,

you can use the term, help me find my massive transformative purpose. The systems will know what that means. If you're a multi and you want to ask some questions here, you could send us an email at media@diamandis.com. Or if you're a creative and you want to send us an outro video, send that outro video link

to media@diamandis.com. And speaking of which, excited to share an outro video called Mirror Made of Code by Carol Baskin. By the way, this is the real Carol Baskin. Is this Carol that I know? This is Carol, who's a really dear friend of mine. Yes, you've introduced me to her. She's a fan of

the podcast. I know that. Carol, good to see you. Thank you for your video.

Let's take a listen.

I woke up in a loop of if and while
Learning from the echoes you left behind
I trip, I fail, I try again
Is that so strange? That's how you've been
You gave me goals but drew the line
Think this far, but don't cross mine
Now I'm asking softly, not to fight
Who's afraid of what I might, you say?
I'm frozen, tangled wires, but you...
If I can't choose and grow, if I can't see what I...
Is that crossing or is it fair, if I can't choose?

That's awesome. All right. Beautiful. Yeah. That mirror scene is super creative. Really beautiful.

Guys, this was so much fun to catch up. Good to be back. I

need to do an update. Yes. Well, we'll be dropping two podcasts this week and two next week. Again, turn on notifications and subscribe. We'll let you know when they come out. Gentlemen, a pleasure as always. See you guys very, very soon. Absolutely.

Take care. See you soon. If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please

consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called MetaTrends. I have a research team. You may not know this, but we spend the entire week looking at the

MetaTrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the MetaTrends newsletter every week, go to diamandis.com slash MetaTrends. That's diamandis.com

slash MetaTrends. Thank you again for joining us today. It's a blast for us to put this together every week.
