
GPT 5.2 Release, Corporate Collapse in 2026, and 1.1M Job Loss | EP #215

By Peter H. Diamandis

Summary

Topics Covered

  • AI Capabilities Shockingly Advanced
  • Knowledge Work Automated 11x Faster
  • Hyperdeflation Reshapes Economy
  • 2026 Corporate Collapse Predicted

Full Transcript

OpenAI releases GPT 5.2. The capabilities are just shockingly different than they were a few weeks prior.

>> OpenAI has just unveiled GPT 5.2, which it's billing as its most advanced frontier model yet.

>> The value that we see people getting from this technology and thus their willingness to pay makes us confident that we will be able to significantly ramp revenue.

>> The fastest scaling consumer platform in history. We're almost at a billion users. That just blows my mind.

>> A lot of change is coming rapidly. I think the biggest challenge is people are not projecting properly on how rapidly this is going to tip.

>> I think 2026 is going to see the biggest collapse of the corporate world in the history of business.

>> In 2025, we had 1.1 million layoffs, which is the most since the 2020 pandemic. 71% of comparisons between a human performing this knowledge work and the machine resulted in the machine doing a better job at more than 11 times the speed of the human and at less than 1% of the cost of the human professional. So knowledge work is cooked.

>> Now that's a moonshot, ladies and gentlemen.

Uh, speaking of alien creatures, I was touring Colossal yesterday with Ben Lamm. I'm an adviser and early investor in this company, and Colossal is amazing. They've got something like 12 different species at different stages of de-extinction, right? They brought back the dire wolf.

>> They're going to be bringing back the saber-toothed tiger. I can't wait for that. And, of course, the woolly mammoth. They created the woolly mouse, right? So they've been able to identify the genes that in particular drive different phenotypes, right? Like length of hair, length of snout.

>> And it's fascinating what they're doing.

And their ability to actually find the closest living relative and then snippets of DNA. So they have DNA going back as far as 1.2 million years. They haven't been able to get DNA older than that, but that's still pretty incredible.

>> But being able to actually, like

>> Yeah.

>> Didn't Ben say that we couldn't restore animals if the DNA was older than like 10,000 years?

>> Well, for example, the woolly mammoth DNA that they've gotten ranges from like 10,000 years to 1.2 million years, right? And

>> Okay.

>> And they've got to identify that that's not a single species, that's a whole spectrum of a species,

>> Right? Because there's evolution going on all that time. And so they're trying to figure out, okay, what part of the phenotypes, like the tusk and the woolly mammoth hair and its cold tolerance and all of those things, and they're reconstructing a single genome, you know, an approximation of the woolly mammoth.

>> Anyway, the programs are amazing, and Ben is such an incredibly good CEO. I'm excited. He's going to be one of our moonshot closing speakers at the Abundance Summit this year. So we're going to go deep on how do you go from zero to a $10 billion valuation in four years, and how do you do that with no bio background at all? Because Ben, right, he was the CEO of Hypergiant, the software company. Incredible.

>> So your multi-armed robot can shear the woolly mouse, and then we can make sweaters in time for the holidays out of it.

>> We can all wear them on the pod.

>> Made by non-humanoid robots.

>> All right, I think it's time to jump in with enthusiasm.

>> Yes.

>> All right. Welcome to Moonshots, another episode of WTF Just Happened in Tech. This is the news that hopefully impacts you, inspires you, gives you moonshot thoughts, and gets you ready for the future, because that is one of our primary goals: how do we prepare you for what's coming next? A lot of AI news. Today is a special episode that we pulled together in order to celebrate the release of GPT 5.2, but we'll get to that in just a moment. I wanted to hit on some of the top-level hyperscaler updates and battles.

So, just a few headlines here. We'll be discussing them through the pod today. ChatGPT was the most downloaded app in the iOS App Store in 2025. Congratulations to them. They're nearing 900 million active users. Gemini is catching up. Anthropic jumps to 40% enterprise share. Amazing. Accenture is going to be training 30,000 people on Claude. Elon has let us know that Grok 4.2 is coming very shortly, in the next few weeks, and Grok 5 in the next few months.

As we said a moment ago, OpenAI has released GPT 5.2. That's coming up in a moment. And interestingly enough, Google launched its deepest AI research agent the same day that OpenAI dropped GPT 5.2. A little bit of a PR battle going on between them all.

>> All right. One other piece of data on the downloads here, to give people a look at the scoreboard. ChatGPT received 92 million downloads, Gemini is at 103.7 million downloads, and Claude has received 50 million downloads. Any comments on these opening headlines before we jump into GPT 5.2?

>> Well, I'm in shock this week at the capabilities. We'll look at the benchmarks in a minute, but the benchmarks really undersell the last two weeks. The capabilities are just shockingly different than they were a few weeks prior. And we'll get into it, but the other big change is the race is on. You know, when GPT 5 kind of disappointed everybody, the Polymarket on Google running away with the rest of this year went to like 90-95%. Now, kind of as Alex predicted, it's a closer horse race. Google's still on top of the stack, but apparently Sam had something in the tank, and who knew. So we'll get into that, too. But I'm just absolutely, no exaggeration, the things that I got done in the last week that I couldn't have done three weeks prior, just coding and building things, I'm in shock.

>> So, um, are they pulling their punches? We discussed that in the past, right, where they're releasing just this much. They know that we're going to have Grok coming out next, so let's then release the next segment to compete directly there.

They are totally pulling their punches. They've absolutely been holding back.

>> I think because they're starved of compute, and they're afraid to roll out addictive capabilities that they just can't deliver on. But you know, Alex experienced this too. Like, yesterday we were going crazy with 5.2 trying to see what it can do, and then it's like, "Sorry, you're done for today. We're out of compute. Sold out. No gas in the tank." And so the competitive pressure is forcing them to go code red, you know, come out with things when they normally would want to hold back and wait until they can find the data center compute, and wait until Chase Lochmiller finishes Abilene. But they just don't have that choice with the competitive pressure on each other.

>> Yeah, maybe just to comment. I think at this point, if you're OpenAI and you have your purported code red and you're in a hurry, you're in a bind. GPT 5.1 came out only a month ago, and you need to rush something to market to put at ease perceived competitive pressures. I think there are only approximately three levers you have. So one lever, to Dave's point, is compute. You can increase the total amount of compute allocated to given models, and that of course comes at a cost. It comes at the cost of compute scarcity. It comes at the cost of longer response times to prompts. The second lever that you have is safety. So you can turn down the safety. You can make models more sycophantic. And that's a way to improve,

>> Right? But can we get a benchmark on sycophantic models? There are a bunch of benchmarks for

>> compromising your ideals to win the market in general.

>> Yeah.

>> Right. So call it the safety knob, the second knob that you can turn if you're in a pinch. The third knob that you can turn is the post-training knob, which can be done on relatively short notice. So you can pick particular benchmarks that you want to really post-train your models to do well on. And I suspect all three of these, more compute, maybe, maybe not, some turns of the safety knob, and then post-training on select benchmarks, is exactly what we're seeing in this cycle now that we have a real horse race.

>> I found it fascinating: we've got probably the fastest scaling consumer platform in history. We're almost at a billion users. That just blows my mind.

>> It's starting to eat the operating system. I mean, when you start to get order of magnitude a billion downloads, at some point you have to ask the question: is this AI user interface basically cannibalizing the entire OS itself? At what point, sometime soon, is every pixel that shows up on a mobile device being AI generated? I think we're not too far from that.

>> Wow.

>> Well, that was definitely the backstory, too, when we were at Microsoft last week with Mustafa Suleyman. Is that podcast out yet? I'm not sure of the order; it's coming out shortly.

>> Yeah. Well, look forward to that one, because what Alex just said is clearly in the minds of Microsoft. They're going to do everything and anything they can to get on this chart that we're showing right now, and they have a lot of assets, which will come up in that pod, that give them a really good chance of getting there. But it's for exactly the reason Alex said: the OS, the whole base of Microsoft, the revenue driver for the last 30 years, is at risk now, and you've got to move to the new thing or

>> It's not just the OS, right? It's the entire app ecosystem. I mean, the end goal here is for these hyperscalers to capture the user as the only AI you need to use, the so-called core subscription. And that certainly is OpenAI's stated strategy: to become the default "core subscription," quote unquote, for consumers. Anthropic's strategy apparently is to focus on enterprise APIs and codegen. xAI is focusing on brute-force scaling and maybe benchmaxing, and Google is focusing, maybe in a more balanced way, on total stack domination: balanced pre-training and post-training. So I think in a real horse race, which is what we're finding ourselves in among the top four frontier labs, we're starting to see differentiated strategies coming to market.

Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.

All right, let's jump into the core story here today. OpenAI releases GPT 5.2. We spun up this pod for our subscribers the day after the release so we can go into detail. What does this mean? You know, we heard OpenAI's red alert. And here's the result.

>> Alex, take it away.

>> Yeah, I've been waiting for this all day.

>> Dave, you want to lead us, or Alex here?

>> Oh, no. I just want to say that these numbers, when they go from 80 to 90, really understate the impact on what you can do. On a benchmark, when it goes from 10 to 40 it looks like a big gain on a line chart, but when it goes from 80 to 90 it doesn't look like a big gain. Yet what you can do firsthand is just mind-blowingly different, and I'll tell you some of the things I've done in a minute. But I've been waiting all day to hear Alex.

>> For those who are listening versus watching, here's a chart of the benchmarks comparing GPT 5.1 Thinking against GPT 5.2 Thinking. And with that, if you don't mind speaking the percentages as well, Alex, as we're going through this, that would be great.

>> Okay, sure. So maybe some high-level comments, and then we can do a detailed play-by-play. High-level comments: one, keep in mind what I said a couple of minutes ago. If you're OpenAI and you need to rush an impressive model release to market, there are probably only three knobs you have. One, you can turn up the compute. Two, you can play safety games. And three, you can do post-training on particular benchmarks. So that story, maybe not the safety story, but the other two knobs, I suspect is what we're seeing here.

So, walking through this chart benchmark by benchmark: we have SWE-bench Pro, which is a software engineering benchmark. We see a modest improvement between 5.2 and 5.1, perhaps attributable mostly to compute and a little bit of additional post-training and/or distillation. We have GPQA Diamond, Google-Proof Question Answering, a modest increase from 88.1% with GPT 5.1 to 92.4%, again so far pretty modest. We have CharXiv reasoning, a larger increase. This is scientific chart reasoning; it could be post-training, and it's not a benchmark that I pay super close attention to.

Then we get to Frontier Math, Frontier Math Tiers 1 through 3, which are easier math problems. And then one of my favorite benchmarks of all time, Frontier Math Tier 4, which is research-grade problems in math that are supposed to take professional mathematicians several weeks to accomplish. I often point to progress on Frontier Math Tier 4 as indicative that, drink, math is being solved. So, focusing on Frontier Math Tier 4, we see Gemini 3 Pro getting approximately 19%, GPT 5.2 Thinking getting 14.6%, and GPT 5.1 Thinking getting 12.5%. This is actually a win in my mind. This is a win for Google and a loss for OpenAI: OpenAI has had a month to attempt to scale up and beat Google in this horse race on hard open, or rather hard closed, math challenges, professional-mathematician grade nonetheless, and still couldn't beat Gemini 3 Pro. And it's not as if these problems have been a state secret. In fact, OpenAI actually sponsored Epoch AI's creation of the Frontier Math benchmark. So OpenAI has had, in some sense, privileged access to all of Frontier Math. Still couldn't beat Gemini. So I think that's pretty instructive.

Moving down the list, AIME, the American Invitational Mathematics Examination 2025: 5.2 is now scoring 100% versus 94%, suggestive of post-training. Then we get to the second set of benchmarks that I think are super interesting: ARC-AGI 1 and 2, ARC being the Abstraction and Reasoning Corpus and, of course, AGI being AGI. So, for those who don't pay super close attention to ARC-AGI, it's sort of a visual reasoning challenge, a visual problem-solving/program-synthesis challenge, testing problems that humans find relatively easy but that machines historically have found exceptionally difficult, sort of an arbitrage between human minds and machine minds. We see here some big differences. So for ARC-AGI 1, the first version of the prize, we see that it's just saturating at this point: 72.8% with GPT 5.1, 86.2% with GPT 5.2. ARC-AGI 1 is cooked at this point. ARC-AGI 2 is nearing the point of saturation: a huge change from 17.6% with GPT 5.1 to 52.9% with GPT 5.2 Thinking. So in my mind this smacks of post-training; that's the obvious strategy.
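For a concrete sense of the task format behind those numbers: an ARC-AGI puzzle presents a few input/output grid pairs and asks the solver to infer the hidden transformation and apply it to a new grid. Below is a deliberately trivial, made-up illustration of that format; real ARC-AGI tasks demand far more abstract rule discovery, and the grids and the mirror rule here are invented purely for this sketch.

```python
# Illustrative ARC-AGI-style puzzle (made-up grids, deliberately trivial rule):
# given example input/output grid pairs, infer the transformation and apply it
# to a test grid. Real ARC-AGI tasks require far more abstract rule discovery.

def mirror_horizontally(grid):
    # The hidden rule for this toy task: flip each row left-to-right.
    return [list(reversed(row)) for row in grid]

train_pairs = [
    ([[1, 0, 0],
      [0, 2, 0]], [[0, 0, 1],
                   [0, 2, 0]]),
    ([[3, 3, 0],
      [0, 0, 4]], [[0, 3, 3],
                   [4, 0, 0]]),
]

test_input = [[5, 0, 0],
              [0, 0, 6]]

if __name__ == "__main__":
    # A solver has to discover the rule from train_pairs alone; here we just
    # verify the candidate rule explains every example, then apply it.
    assert all(mirror_horizontally(x) == y for x, y in train_pairs)
    print(mirror_horizontally(test_input))   # [[0, 0, 5], [6, 0, 0]]
```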

>> Take a moment, for those who don't know what post-training is, because it's one of the three knobs that you spoke about, and it's important for folks to understand what that means.

>> Sure. So let's reason by analogy to the way humans, in sort of a conventional Western upbringing, learn. You have the baby, infant-like learning; that's approximately pre-training. The P in GPT stands for pre-trained. Pre-training is unsupervised training: you're feeding a model information about the world and giving it the goal of predicting what comes next. There's not much of a supervision angle to it, not unlike a human newborn that's just taking in information via lots of sensory feeds and trying to make sense of it with very little guidance. Then there's mid-training and post-training. Think of these phases of training as being not unlike attending primary school and secondary school, where you receive explicit supervision. You're receiving grading. You're being given particular assignments. And there are many ways that you could be graded. You could be graded very granularly, like a thumbs up, thumbs down, grade A, B, C, D, F. And there are other ways you can grade; for example, you can be given more of an open-ended assignment and graded on how good the ultimate final product of that open assignment is. This sort of mid-training and post-training really became popular with the o-class series of reasoning models from OpenAI, and everyone has since adopted reasoning models and post-training, not just to make humans happy, which is another form of post-training, like pleasing your teacher, but also to show that you can, via reinforcement learning and other mechanisms, solve hard problems and reason about hard problems. This is where post-training shines. This is where almost all of the alpha, if you will, in increasing model capabilities over the past year or so has come from, not from pre-training.
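To make the analogy concrete, here is a minimal toy sketch, not any lab's actual pipeline: "pre-training" is raw next-token counting on a tiny made-up corpus, and "post-training" is a crude reward-weighted update from a stand-in grader. Every name and number in it is illustrative.

```python
# Toy contrast of the two phases discussed above. Pre-training: unsupervised
# next-token counting on raw text. Post-training: a crude reward-weighted update
# that nudges the model toward completions a grader prefers. All values are toy.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# --- Pre-training: learn next-word counts purely from raw data ---
counts = defaultdict(lambda: defaultdict(float))
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1.0

def sample_next(word):
    nxts = counts[word]
    words, weights = list(nxts), list(nxts.values())
    return random.choices(words, weights=weights)[0]

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        out.append(sample_next(out[-1]))
    return " ".join(out)

# --- Post-training: reinforce generations a simple "grader" rewards ---
def reward(text):
    return 1.0 if text.endswith(".") else 0.0   # stand-in for human/AI feedback

def post_train(steps=200, lr=0.5):
    for _ in range(steps):
        text = generate("the")
        r = reward(text)
        toks = text.split()
        for cur, nxt in zip(toks, toks[1:]):
            counts[cur][nxt] += lr * r          # reward-weighted count bump

if __name__ == "__main__":
    random.seed(0)
    print("pre-trained sample: ", generate("the"))
    post_train()
    print("post-trained sample:", generate("the"))
```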

So getting back to the benchmarks: ARC-AGI 1 and ARC-AGI 2, the R in ARC being reasoning, are benchmarks designed to test the reasoning capabilities of models, and we see a huge jump. We see frontier-level, state-of-the-art performance by GPT 5.2. With ARC-AGI 2, reasoning is well on its way to having been solved at this point. And I think we'll cover this probably in the next slide, but the costs are collapsing as well; maybe we'll talk about that in a minute. Just to wrap up, then, for purposes of narrating this chart, the final benchmark here, which is perhaps the most interesting of all, is GDPval. GDPval, the gross domestic product eval, was created by OpenAI with the idea of having an eval that measures AI's ability to automate knowledge work in the general human service economy. So we're seeing a jump from GPT 5.1 at 38.8% to GPT 5.2 now at 70.9%. This is the clearest indicator in my mind that the human knowledge work economy is cooked.

You heard it here: it's cooked. This is 44 different occupations that OpenAI, and by the way this is all open source, you can go on GitHub and read all of the tasks for GDPval: 44 different human occupations, 1,320 specialized tasks like creating PowerPoint presentations or Excel spreadsheets, typical, prototypical knowledge work.

>> It's cooked. It's automated. And 5.2, probably again due to elaborate post-training, can get almost 71% of these tasks. What does that actually mean? 71% of comparisons between a human performing this knowledge work and the machine, 5.2, performing the knowledge work resulted in the machine doing a better job. And that was, by the way, at more than 11 times the speed of the human and at less than 1% of the cost of the human professional. So, knowledge work is cooked.
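A rough back-of-the-envelope on what those three figures imply, using only the numbers quoted above (71% win rate, 11x speed, 1% of human cost) and a hypothetical "machine drafts everything, a human redoes the 29% it loses" workflow; the $100 and 4-hour baselines are placeholders, not data from GDPval.

```python
# Back-of-the-envelope on the GDPval figures quoted above: 71% machine win rate,
# 11x human speed, 1% of human cost. The "machine drafts, human redoes the losses"
# workflow and the baseline numbers are hypothetical illustrations, not OpenAI's math.
WIN_RATE = 0.71          # fraction of tasks where the machine output is preferred
SPEED_MULTIPLE = 11.0    # machine completes a task 11x faster than a human
COST_FRACTION = 0.01     # machine cost as a fraction of human professional cost

def machine_first_cost(human_cost_per_task=100.0):
    # Pay for a machine attempt on every task, plus a full human redo on the 29% it loses.
    return COST_FRACTION * human_cost_per_task + (1 - WIN_RATE) * human_cost_per_task

def machine_first_time(human_hours_per_task=4.0):
    machine_hours = human_hours_per_task / SPEED_MULTIPLE
    return machine_hours + (1 - WIN_RATE) * human_hours_per_task

if __name__ == "__main__":
    print(f"cost per task:  ${machine_first_cost():.2f} vs $100.00 human-only")
    print(f"hours per task: {machine_first_time():.2f} vs 4.00 human-only")
```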

>> Okay.

>> You know, I figured something out on that last line this week, too. Because, you know, I'm chairman of about a dozen companies, and I'm like, "Guys, what is holding you back? Why have you not deployed this? You can cut costs dramatically. You can automate. You can expand your market share." And they're all like, "Yeah, I don't know. We're really struggling." Like, oh, it's driving me nuts. What's going on? So, a couple of things that I finally figured out. One of them is, you know, one of the companies is working entirely in Java. And when you turn this loose in Python, where it had a lot more training data, it can build virtually anything. It just blows your mind. And it really sucks in C still, and I don't think they're going to fix it because they just don't care; we've moved off of C anyway and there's not enough training data. And Java's somewhere right in the middle. So when they benchmark it, they're like, well, let me try to take my legacy thing and see if it can just immediately fix it. And it struggles. But if you just say, no, scrap it, rebuild it entirely from scratch in Python, you come back an hour later and it's done.

>> So they're stuck there. And the other place they're stuck is in operations.

>> They're saying, "Well, look, the way we pick up a customer service request is in an email that's in an Outlook folder that has all these security whatevers on top of it. So it's struggling to open and read the emails." And so we're giving up. Like, don't you think you could maybe fix that front-end interface in maybe a day, then try it on the rest of the process and just turn it loose, and it would immediately crush the problem? So they're stuck on these little edge-case issues. And I'll tell you, it also comes up, you know, that ARC-AGI benchmark is the one that was specifically designed to be things that a human finds relatively easy and intuitive and the AI still struggles with. And I've had countless conversations around academia with people who desperately want to say there's still something missing, something fundamentally missing in this great AI brain, and it hasn't been solved yet, and the proof is ARC-AGI 1. And you're like, okay, boy, do you look foolish now, just three weeks later, five weeks later, because it's basically saturated, and it's going to be completely saturated imminently.

>> And on the GDPval, you know, if you remember, Elon has spoken about one of the companies he's going to be starting, Macrohard, and his mission is basically: go into a company, simulate all of your employees, and deliver it as a service back to that company. A lot of change is coming rapidly. I think the biggest challenge is people are not projecting properly on how rapidly this is going to tip. Our next slide here is the GPT 5.2 ARC-AGI update. We spoke about the numbers in the table just recently. Here we see it charted out, where GPT has had a 390-fold efficiency improvement over o3 back from 2024. Anything you want to add to this, AWG?

>> Yeah. So, we've spoken several times on the pod about hypothetically 40x year-over-year hyperdeflation. We're seeing 390x year-over-year hyperdeflation on visual reasoning for ARC-AGI. This is unprecedented. And this level of hyperdeflation in the cost of intelligence will not stay contained to the data centers. It will not stay contained to these still relatively narrow, I know they brand themselves as generally intelligent benchmarks, but they're still relatively narrow in the scheme of things. It's not going to stay contained. Hyperdeflation is going to spread outward from these sorts of benchmarks to the rest of the economy.

That's comment one. Comment two, just focusing narrowly on ARC-AGI: one of the lovely things about the ARC-AGI 1 and 2 benchmarks is they don't just focus on raw performance. They also focus on cost. If it costs a hundred trillion dollars to solve a hard problem, if it costs more than the human economy to solve an important problem, then it almost doesn't matter. But if it's incredibly affordable, you know, to your mantra, Peter, about abundance: if abundance is unaffordable, what's the point? It has to be affordable abundance. And the way we get there is exactly what the ARC-AGI organizers do, which is you measure on a scatter plot: performance on the vertical axis and cost per task on the horizontal axis. And that shows you what progress looks like. You want progress that looks like points in the scatter plot going up and to the left: greater performance at lower cost. And in fact, going back to my earlier comments, if you see a frontier lab hypothetically just increasing compute costs but not actually making efficiency gains, that shows up in these plots too. So you can see, for example, if you look at ARC-AGI 1, although it's probably a little bit difficult to read here, if you squint you can see that GPT 5.2 is on sort of the same extrapolated slope as GPT 5 mini, suggesting that, at least as it pertains to ARC-AGI 1, there hasn't actually been major algorithmic or efficiency progress; it's just more compute being spent on the same tasks, so it feels smarter, but that's because you're putting more work into it. As the aphorism goes, you're lifting with your back, not with your legs. But with ARC-AGI 2 there is in fact radical improvement. So we're seeing progress.
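Two of the quantitative points in that answer are easy to check by hand: what a 390x year-over-year cost drop implies month to month, and how the "up and to the left" reading of the ARC-AGI scatter plot works as a Pareto comparison of (cost per task, score) points. A small sketch follows; the 390x figure and the 86.2% score come from the discussion above, while the dollar costs are invented for illustration.

```python
# Sketch of the two ideas above: (1) what 390x/year cost deflation implies per month,
# and (2) the "up and to the left" Pareto reading of an ARC-AGI-style scatter plot.
# The 390x and 86.2% figures are quoted above; the dollar costs are made up.

def monthly_factor(annual_deflation: float) -> float:
    # A 390x drop per year compounds from a smaller monthly drop: 390 ** (1/12).
    return annual_deflation ** (1 / 12)

def pareto_improves(new: tuple, old: tuple) -> bool:
    # Points are (cost_per_task_usd, score). "Up and to the left" means at least
    # as good a score at no higher cost, and not the identical point.
    new_cost, new_score = new
    old_cost, old_score = old
    return new_cost <= old_cost and new_score >= old_score and new != old

if __name__ == "__main__":
    m = monthly_factor(390.0)
    print(f"390x/year ~= {m:.2f}x cheaper per month "
          f"(~{(1 - 1 / m) * 100:.0f}% cost drop each month)")
    # Hypothetical scatter points: (cost per task in $, benchmark score in %)
    older_model = (20.0, 75.0)
    gpt52_point = (0.05, 86.2)   # score from the chart above; cost is illustrative
    print("New point Pareto-improves on the old one:",
          pareto_improves(gpt52_point, older_model))
```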

>> Well, this is a benchmark that I think a lot of people can relate to. The next one here: GPT 5.2 writing benchmark comparison, long-form creative writing and emotional intelligence. Again, we're seeing improvements across the board. Alex, one more interpretation here.

>> Spiky. This is very spiky. So we saw that sort of interesting three-dimensional plot on when we are going to reach AGI, and again spikiness was the descriptor for it.

>> That's right. That spider plot was purportedly comparing humans with AGIs, or strong models in general. What we're starting to see here is increased spikiness and spiky competition between the different frontier models. So just a little bit of context: the long-form creative writing benchmark evaluates a model's ability to basically write a novella, about an 8,000-word novella, as judged by Claude Sonnet. And the emotional intelligence Judgemark benchmark measures how well a model can grade short fiction. And so what we're seeing here is no single model dominating all the benchmarks. We're seeing, for example, that with long-form creative writing, Anthropic's Sonnet 4.5 wins and does the best job at writing an 8,000-word novella.

>> What do you guys use for writing? I mean, I've been using Gemini 3 Pro. It looks like Claude Sonnet 4.5 is the one to go to. Are they all

>> I've been using Gemini 3 Pro, and I found it to be really amazing for crafting, but I'm writing mostly business documents, so that's a little different.

>> Same for me. I use 3 Pro for almost all of my writing.

>> Yeah, I'm using Kimi K2 for huge volumes of stuff on my little fleet of Nvidia chips that I hijacked. And then I'm actually using Gemini to, one, de-spyware it, and to proofread it. And I'm using Claude Opus at, you know, my Opus expenses went from 200 bucks a month to a thousand bucks a month to, I'll easily crack 20 or 30K this month, but I'll also generate more code this month than in my entire life up to this date. So it's a bargain at 20K, but my expenses are going through the roof on Anthropic, and I'm happy with it, actually.

>> Spyware? What's "de-spyware"? What does it mean?

>> Well, Alex warned me that when you use a Chinese open-source model, it can inject evil things into the code that it returns to you.

>> This is actually public information. We're not breaking news here. Maybe just

to expand on this, two comments. One comment is there have been very well-publicized, outside of the pod, studies that found, for example, that prompting certain open-weight models with topics that are politically sensitive for certain countries results in those models emitting more vulnerable code. That's something to be wary of. So I would say, more broadly, for creative writing and so on, none of these models is so strong that I can ask them to do a good job of all the writing. What I find inevitably is I end up having to do 80% of the work, and the models function more like a junior editor, as it were, and I end up still doing the majority of the writing. Similarly, to Dave's point, with codegen, I would certainly not trust codegen models to not insert vulnerable code. So

>> Yeah. Well, when you told me that a week ago, I was like, you know, Alex, I'm just going to see the code and I'll see if it's injecting anything evil in there. I'm not super worried about it. Let's go. So, here we are a week later, and it's generating volumes that no human being could ever look at. I'm like, "Oh, I was completely wrong." And it worked. The code just flat out works. I don't even have to look at it. It's passing every eval. It's building the interfaces that I want. It's doing everything I wanted it to do without my needing to look at it. So now I've actually got GPT 5.2 proofreading right now, but I think what I need to do is just turn off Kimi and pay the 10x higher price, actually a 20x higher price, to run it on GPT 5.2 instead.

>> Yeah. But I'm going to have to do that, because I don't know how else to make sure I don't end up spyware-ing my entire world.

>> This is a real challenge. If you have basically intelligence being dumped into the world, then there is this implicit trade-off between do you want intelligence cheap or do you want it to be safe?
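One cheap, admittedly crude mitigation for code that is generated faster than any human can review is an automated pass that flags obviously risky constructs before anything ships. The sketch below is a hypothetical illustration of that idea, not a real security scanner; a pattern list like this will only catch the most blatant issues.

```python
# Crude, illustrative guardrail for machine-generated Python (not a real security
# scanner): flag obviously risky constructs for human review before anything ships.
# The pattern list is a made-up starting point and will miss most real problems.
import re

RISKY_PATTERNS = {
    r"\beval\(":                                "eval() on dynamic input",
    r"\bexec\(":                                "exec() on dynamic input",
    r"subprocess\.[a-z]+\(.*shell\s*=\s*True":  "shell=True subprocess call",
    r"requests\.(get|post)\(\s*[\"']http://":   "plaintext HTTP request",
    r"pickle\.loads?\(":                        "unpickling untrusted data",
}

def flag_risky_lines(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, reason, line.strip()))
    return findings

if __name__ == "__main__":
    generated = 'import pickle\ndata = pickle.loads(blob)\nresp = requests.get("http://example.com")\n'
    for lineno, reason, line in flag_risky_lines(generated):
        print(f"line {lineno}: {reason}: {line}")
```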

>> Yeah. I mean, we've talked about this as a potential strategy for China: making open-source models available to the world. If it becomes the base on which you've built everything, then it's there from the beginning. I don't want to impute a dystopian point of view to all the Chinese model makers, but it is a concern.

>> I think we're going to see a move to sovereign intelligence. I think this is the long-term trajectory we find ourselves on. Every sovereign entity is going to want its own sovereign trusted stack.

>> Well, how do you feel about France? So, Mistral's Devstral 2 raises the bar in open-source coding tools. So what do you think about Mistral, Dave? Are you playing with it at all?

>> You know, it's funny. I saw this chart and I had kind of forgotten all about them, and I guess my read on the chart was, "Oh, it exists." But, you know, the headline says it raises the bar, but it's actually below, I mean, only a notch, but it's below Kimi and DeepSeek. I guess you could probably trust it more, because Europe is very trustworthy. But other than that, it was like, what's the news here?

>> The headline is: Europe, slow but trustworthy. Okay. And also, I mean, there's this sense, for a variety of reasons, that Mistral is somehow the EU's sovereign AI stack or sovereign AI model. But its roots are all very much American. All of its early funding is from blue-chip American VCs. Its founding team came from DeepMind and Meta. Yes, it has raised a large amount of money from ASML most recently, and my understanding is Europe is very interested in using Mistral as sort of an AI emissary to the rest of the world. But its technical roots are deep in the US, and it's sort of this bizarre world we find ourselves in, where a Paris-based frontier lab, or neolab, however they brand themselves, is right now the only main counterweight to Chinese open-weight models. There's one thing I thought was really interesting here. As it's getting close: once you have open-source systems beating closed systems, you move innovation from the lab to the community level, and there's no catching up with it once you get that flywheel going. So I thought this was a big deal. They may need a little bit more improvement, per Dave's point, but I think once they get there

>> Is that true for AI open-source models?

I know it's true for a multitude of fundamental, just plain software models. We've seen that before. Alex, do you think

>> It's tricky, because you have to ask what the primary limiting factors to increasing capabilities are, and it's compute more than talent. There's lots of talent in the world, but compute is still pretty scarce. So the community has lots of talent, but in my mind

>> They don't have compute. They're compute-starved. This isn't like Linux, where you can sort of say lots of eyeballs make all bugs shallow. In this case, the way you make the bug shallow is by investing trillions in capex.

>> Well, this conversation is critically important. And Alex, you can help the world a lot, because every corporate executive in 2026 is going to need to choose something. And you know, there are only two types of exec out there: people who are familiar with this and have already kind of got their landscape figured out, and then the other 99% who are going to get slapped in the face in 2026 and have to react, and they're late to the party. But you saw the benchmark earlier. Everything every one of your employees can do can now be done by AI. What are you going to do? Just sit there and ignore that? So 2026 is the turning point. But these choices are really tough on this chart. Like, to an executive it's, "Well, god, I can go open source at 1/20th the price, but I get 72.2 ambiguous units of thing, or for 77.9, like, what does that mean?" It means a lot. Anyone looking at the chart would say, oh, what's the big deal? It's only five units. But the reality is the capability difference, in terms of your economic value, is massively bigger as this goes up even a little bit. And so it's a tricky situation in 2026 for pretty much all of corporate America, the corporate world.

I think it's probably, I mean, if I had to spitball this one, I think it's going to take some sort of regulation to move the dial on this. Right now, if you hang out with all the Silicon Valley firms that are using open-weight models, they're just all using Alibaba's Qwen at this point. And Mistral and Devstral, that's great, but in the mind of a typical Silicon Valley firm that needs to host its own models, it's probably too little, too late. They're all using Qwen. They're all fine-tuning Qwen, and it's going to take an executive order, or an act of Congress, or some sort of regulatory measure to turn off the cheap Chinese open-weight intelligence before they're incentivized to move over to Mistral or Devstral or GPT-OSS. But Dave, I think one of the points that you made is the CEO and the board of directors of a company are in extremis, in sort of paralysis, not knowing what to do,

>> Right? And their lunch is going to be eaten by the small startup that says, "Oh, there's an interesting business, so we should go and enter," and it builds an AI-native approach at 1/100th the cost and, you know, 10x the innovation and evolution speed. And so what do they do? Who do they turn to to help them reorganize their company? And it's a risky move, because if you brought in an outside consulting firm, right, I don't think it's going to be the big consultants. I mean, there are going to be AI-native companies out there; we're going to be having a pod conversation with one company called Invisible that does this very shortly, and there are others. The right way to do it, you said it earlier, is to scrap what you've been doing and actually start with a fresh stack, and that is so hard for any company to do. Salim?

>> Yeah, this is right in our wheelhouse. Essentially, we're working with some very big companies, and Dave, you're exactly right. They're totally paralyzed. They're flailing. They have no idea what to do. And if they bring in one of the traditional consulting firms, they just push them faster down the old path, right? So that doesn't work at all. What needs to happen is they need to take their capability here, create a new stack on the edge that's completely built AI-native from the ground up, and then little by little deprecate the old and move functionality, capability, and resources to the new. The political and the emotional stress of that is causing most of them to do nothing.

>> Yeah.

>> And so out of the, say, 20 major companies we're working with, maybe three are doing maybe 50% of the right thing, and most of them are just like, we're going to keep pushing this old model and seeing where we get to. Surely we can catch up, because we've always been able to get there before. And the answer is: you absolutely cannot. And so this

>> It's Macy's, it's Blockbuster. And when you say "we," you mean OpenExO is doing some work with these companies out there.

>> Yeah, we have like 42,000 people talking to companies around the world, and so we're kind of aggregating and gathering the information from all of that.

>> I think 2026 is going to see the biggest collapse of the corporate world in the history of business.

>> You heard that prediction first here. No doubt, because I think this is going to be, and we should maybe do an end-of-year perspective and some predictions. But

>> For all of the madness we've seen in 2025, it's like this is the slowest it's ever going to be. 2026 is going to be 10x to 50x to 100x crazier. So I don't even know where to start.

And I've got benchmark fatigue right now

>> Dealing with all this: if you hire Salim to help you with your strategy, one of the things he'll tell you is to read Clay Christensen's The Innovator's Dilemma, which exactly addresses this question. And what that book will tell you to do, and Clay Christensen's foundation will tell you to do, is go find Link Studio, Y Combinator, Neo, go out there and find your AI development partners. Try to do a deal with them where you either invest in them or you become a development-partner customer for them. Pull them in, give them revenue, because their market cap will go way up. They'll all become wealthy, but they'll then hire the talent. And point them at your internal problem and have them solve it inside your organization as an outside, very tightly bounded startup company that's growing like crazy. That's the only way you're going to get the talent focused on your internal problems. You can't hire the talent directly anymore. You've got billion-dollar signing bonuses, yeah, you know, all over the place.

>> And by the way, Salim will also tell you to go read Exponential Organizations 2.0, which is our book, which actually walks through step by step what to do,

>> How to do this. Yeah. I actually had a couple of really interesting conversations with Clay before he passed away. And one of the things he very honestly admitted was that the innovator's dilemma works really well for identifying the cracks in the structure, but it's not that great on the prescriptive side or for trying to predict. For example, in his model, Uber is not very disruptive. And I said, but Uber is very disruptive. It fits right into the wheelhouse of our ExO thing. And he goes, yeah, it means our model's wrong. And when we drilled into it, what we realized was that the innovator's dilemma assumes that the verticals, like transportation, energy, healthcare, education, stay in those verticals. So Uber as a transportation company may disrupt a little bit of transportation, but, without realizing it, it's also disrupting healthcare delivery and restaurant delivery and food delivery, and it can go horizontal across a lot of these. And so the old verticals are essentially collapsing, the old newspapers with the printing presses, the utilities, and this and this and this. And to Alex's point, it's all going to become one category called compute. And that's where that

compute. And that's where that >> well if you don't want to do what Selma is suggesting, the other choice is to do a $20 billion aqua hire plus 14 billion of new payroll. And and that's the other way to solve the problem.

Or I tell you the other thing I'm see the other thing I'm seeing that's unbelievable executives at that level are are go looking at every looking at the world and going yeah I'm just going to retire right now and so there's this

unbelievable >> stop opting out exactly like falling off the cliff going >> it's the most fun time in human history how can you not not diving into the ground

I actually respect that I tell you why what they're doing is they're basically saying I can't navigate this new world I'm a and let the younger generation navigate

this because I can't do it.

>> But it's very honest, right? At least

the worst thing in the world is the old fddy duddies that are running the world on the old model that can't that won't get out of the way and we're seeing that much more in politics to some extent in the corporate world. This massive change

happening.

>> So, talk about billion-dollar salaries, talk about the innovator's dilemma. Our next story here is: Meta's shifting AI strategy is causing internal confusion. Meta is at an inflection point right after mixed Llama 4 results and a reported $14 billion AI talent spending spree. You know, Mark is considering whether an open-source strategy can still compete with closed, vertically integrated rivals like OpenAI and Google. Dave, what do you think about this?

I think they're doing exactly the right thing. Actually, the other backstory here, which I guess is validated, maybe it's more rumor than validated, is that they're getting heavily into distillation of other people's models to accelerate inference-time speed. And what's exciting about that is, if you look at where we are in human history, intelligence in a box was invented just days ago, or well, really two years ago, but it's brand new in the world. And now we're in the hyper-experimentation phase of how do we make it bigger and better: by running many, many agents in parallel, by expanding the context window and dumping in tons more data, and by iterating it over and over again with chain-of-thought reasoning, and we're getting ridiculous gains. But we're brand new at that game. So what Meta has realized is, look, we're behind in the foundation model race, we do need to rebuild and catch up, but that's not going to happen overnight. Where we can potentially get ahead is by raw inference-time speed and having many, many more agents working on things in parallel. And I believe that will also lead to self-improvement, which will get them back on the map. So I think they're directing all their research energy now into how do we make this blazing fast and be the world leader in distillation. That's my take.

>> Incredible. I'm blown away by the 14 billion dollar hiring spree. Just that number. I can't process that number.
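For listeners who want a concrete picture of the distillation being described: the standard textbook formulation trains a small, fast student model to match a larger teacher's softened output distribution. The sketch below is that generic recipe in miniature, not Meta's actual method; the logits and temperature are made-up toy values.

```python
# Minimal, illustrative sketch of knowledge distillation (not Meta's pipeline):
# a student is trained to match a teacher's softened output distribution.
# Teacher/student logits and the temperature are made-up toy values.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Softening with temperature > 1 exposes the teacher's relative preferences
    # among wrong answers ("dark knowledge"), which is what the student copies.
    p_teacher = softmax(teacher_logits, temperature)
    q_student = softmax(student_logits, temperature)
    return kl_divergence(p_teacher, q_student)

if __name__ == "__main__":
    teacher = [4.0, 1.5, 0.2, -1.0]   # big model's logits over 4 next tokens
    student = [2.0, 2.0, 0.0, 0.0]    # small, fast model's logits before training
    print("distillation loss:", round(distillation_loss(teacher, student), 4))
```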

>> Well, remember, they've got a massive cash cow, a cash-flow and cash-generating engine. And Mark has basically said this is the race: if we don't spend the money now to get towards number one, it will just slowly go away. So, Dave, what you're saying

>> And what's cooler than cool is that he's already decided to use every single penny of it, plus debt on top of that, to try to win this race. And Wall Street has said that's fine. No damage to the stock. Go for it. We love what you're saying. That's just a beautiful thing.

>> So what you're saying is they're moving from trying to focus on the open source of the foundation model to putting all of their chips on the agent strategy.

>> Well, so much innovation there, too.

>> Yeah, I think they're in a bit of a tricky situation. So, I know the key players. Zuck's undergrad adviser before he dropped out was my postdoctoral adviser. Nat Friedman, who with Alexandr Wang is helping to lead this new lab, was my first roommate at MIT. I'm pretty familiar with the key players in this particular story. And I think there are three strategies that Meta could be pursuing and/or has been pursuing. One strategy is that of commoditize your complement: drive the cost of generative AI to zero. That was the Llama strategy they were pursuing. Problem is, Llama 4 was a disaster, and the Chinese open-weight models are flooding the market and doing a much better job. The second strategy that they could be pursuing is more conventional, and perhaps what Wall Street would expect out of Meta: use strong AI to improve Instagram and other Meta products. I would have to imagine many executives at Meta would like to see all of these new AI resources being used to just improve Meta's other existing products. That's strategy two. Strategy three is: compete directly with the frontier labs, with closed-source, API-based models, to be the first to superintelligence. So I think what Meta has to struggle with, it's almost, hopefully not like a civil war internally, but what they have to decide is which of those three strategies they really want to pursue. And my guess is there are constituencies with different interests within Meta that want to pursue each one of those three.

>> I cannot believe Mark is not all in on number three. I mean, being first to superintelligence, that just feels like Mark's M.O.

>> Yeah.

>> Yeah.

>> And

>> Yeah. And I think very often the cover story is, look, we're going to enhance existing products. We're going to use our internal data. You know, we've got a huge amount of internal posts that we can use as training data. That's all kind of a cover story for the real goal: we want to win the race to AGI and ASI. By the way, everybody, I want you to realize, as you're hearing these stories about Google, about Meta, it's all about business model innovation on top of all of this, right? Google going from an ad-based search company to now an AI-based company that's delivering a whole slew of different products. Meta is, I mean, this is where companies fail: when Blockbuster did not change its business model even though it had twice the opportunity to buy Netflix, right? So how do you actually disrupt your own company and shift its business model? Otherwise it's game over.

>> Innovator's dilemma to Dave's point earlier.

>> Yeah. But I think it's also ironic: Sam Altman has said publicly that he'd much rather have a billion users without a frontier model than vice versa. And yet what we see from Meta is the exact opposite strategy. Meta already has their billion-plus users, but they would much rather have a frontier model at this point.

>> The grass is always greener at the other frontier lab. That's funny. That's a good phrase.

All right. Our next story here is: Google DeepMind to build a materials science lab after signing a deal with the UK. So we've heard about this as well. Another company, out of MIT and Harvard, called Lila, is doing something very similar, where basically it's all about the data, and if you've consumed all the data, you need to go find new data. So imagine having a lights-out robotic capability where the AI is putting forward a scientific hypothesis and designing experiments, and then at night robots in the lab are running the experiments to get the data to either confirm or modify your hypothesis, and let's do that a thousand times or 10,000 times faster than humans can. I think we're going to see multiple companies; I think every frontier lab is going to need to have this kind of data mining. We're data mining nature, understanding what's going on. In particular here, they're focusing on materials science. Lila is looking at biological sciences. Thoughts on this, gentlemen?

>> I don't know if there's a Polymarket on this, but Demis is really leading the race to being the coolest guy on Earth. He got his Nobel Prize in chemistry; now he's going to crack materials science. And you kind of could see this coming, because AI can allow you to be a world-leading expert in anything, and he's the master of the biggest AI compute in the world, and the algorithms, and the TPUs he's got.

>> And he also isn't, he's not one of the corporate leaders trapped in the political fray.

>> Beautiful. We're going to have the coolest-guy benchmark, okay.

>> Well, what's great is you want somebody with that purity at the edge of

this, which is fantastic. There are a couple of things I thought came across for me, having kind of hunkered around in physics labs during my degree. If you have a fully autonomous lab, this is like the biggest breakthrough in scientific progress since the scientific method was invented, because we've talked about dark kitchens and dark factories, and now we have dark labs. Holy crap.

>> Yeah. You know, it's funny too. I can only find a handful of people, like Demis, like Alex on this pod, there are like 10 or 12 that I could name, who can tell you the implications in all these other fields: in biotech, in materials science, in chemistry, in math. You know, Alex is talking about solving all math. It's just such a small group of people who see where this is going to take us and how short that timeline is. So it's good to see Demis doing materials science.

>> This is AI-assisted science and AI-native discovery. Alex, you want to close us out on the subject?

>> This is what comes after superintelligence. What comes after superintelligence is solving math, drink, science, comma, engineering, comma, and medicine. And yes, math is being solved. We've spoken about that perhaps ad nauseam at this point on the pod. We haven't spoken as much about AI solving all of materials science. And there are like a dozen companies. It's not just Google. It's not just Lila. It's not just Periodic Labs. There are a dozen companies that are all laser-focused on solving materials science, and that's going to give us so many upsides. It's also, when we talk about recursive self-improvement, having better semiconductors, having better superconductors: science is the foundation upon which everything else is built.

>> The medium here we come >> and the innermost loop accelerates again.

>> Yeah. And by the way, uh, for our new listeners, our new subscribers, if you hear Alex saying drink, there's been a bingo game sort of invented for, uh, terms that are repeated on a regular

basis. You'll you'll be hearing it. All


right, let's move on to our next story here. Um,

uh, and I don't know how I feel about this story. I I sort of feel like I

don't want to like overblow, you know, overexpose what's been already overblown, but uh this is a story of an

AI native character called Tilly Norwood. Uh and she's an AI, you know,

native actress that's freaking out Hollywood. So, Tilly Norwood is an AI

made actress created by a London studio uh to star in films and social media. Uh

built over six months with GPT. Uh Tilly

went through 2,000 design versions and her YouTube videos have garnered over 700,000 views in October.

We saw this also in the music business where fully AI native bands and and music tracks have been created and people don't even realize they're listening to something that's just fully

AI generated.

>> Uh >> she has her own agent.

>> Yeah.

>> And reportedly like 40 different contracts for for movies and other development projects. This is I I would

say like this is consistent with my my modal hypothesis that over the next 10 years we're going to live out the plot of every sci-fi movie ever made. In this

case, this is actually I don't know if you saw the movie Simone. Uh this was the the plot of the sci-fi movie Simone where an AI actress develops a life

of her own, takes over. It has Al Pacino in it. It's it's a fun movie, but like

we're we're going to see AI actors and and actresses take over potentially, or at least we'll we'll discover how much humans crave authenticity in their entertainment.

>> There's no doubt in my mind that that humans do not crave authenticity as much as we think we do and we will just watch whatever is interesting and entertaining. And I was at the

Washington Post when, you know, every reporter there was saying, "You know, the Post will be fine because people will want genuine great reporting from great reporters who are struggling

in the field to find the stories." That

was right before... Yeah. Guess again.

Gone. Just gone, and in in just a couple years too. The timeline was so much shorter than than they ever would have thought. From from top newspaper in

the world, multigenerational, been in the family for three generations, to gone.

Jeff Bezos bought it for cents on the dollar in just what, three years, four years. So, that's going to happen here,

too. Uh, and no doubt in my mind, it's

going to happen with music. It's going

to happen with movies. It's going to >> Yeah, it's inevitable.

>> This is This is an AI performer working 24/7, uh, appearing in unlimited projects, never aging, never burning out, uh, never needing to renegotiate contracts.

I mean, this is the Screen Actors Guild's worst nightmare. I had dinner a couple

of nights ago with a dear friend on my XPRIZE board who used to be the head of two of the major studios and then uh

an actress uh who's another dear friend and we were talking about this and it it is scaring the daylights out of the

industry uh and the I mean it's it's >> Well, no, good, because they'll react and I don't I'm not I I wish nothing but good to happen to the people that are in the industry, but good that they're scared, because then they'll react as opposed to

getting crushed. I didn't mean to.


>> Well, well, the question becomes then, what's the response, right? Are you as an actor

going to license your persona? Because that's the way you're going to make money in the final result. Because if you don't, then

the industry will simply, or you know the next-generation industry will simply, create a Tilly Norwood who actually is cuter than you or more handsome than

you. Uh, able to

>> Doesn't age.

>> Doesn't age.

>> Oh yeah, there you go. Doesn't age.

That's a huge one. Um, I'll tell you one thing.

>> I wonder when you'll have one of these winning the Oscar, >> right? Because in theory, in theory,

they should be the best.

>> We have a lot of those benchmarks. When

will the first AI win a Nobel Prize, right? When will the first AI, you know,

build a >> billion already did it cuz he's kind of half anyway.

>> That's done.

>> It's squishy. Also, there have been, by my count, at least two Nobel prizes.

There was Demis with AlphaFold in chemistry, and then there was also Geoff et al. with restricted Boltzmann machines for physics. The the squishy

thing here is you can always do a secret cyborg, as as some would say, and wrap AI talent inside a a human meat body, and the human claims the credit for

it. So I I it's unclear again like how

much humans crave authenticity. Does

this become a separate category in the in the Oscars like animation? Is this

sort of an an increment on top of animation that's real life animation or is this an actual labor substitute? I

don't know yet.

>> I think a lot of that thinking though a lot of that thinking is is a little bit misguided in that what what the actors will be looking for is a feature-length movie in a theater where it's all AI and that's what they're going to use as

their bellwether for the threat. But

that's not what's going to happen. And

if you look in the data, short form video is taking over the movies anyway.

And video games are already miles ahead of movies.

>> We had these conversations. Kids don't

go to the movies. They watch YouTube videos. It's all

>> Exactly. So Tilly Tilly will end up

being a star in every video game and also every TikTok clip >> across platforms. >> And they'll say, "Well, that's not a threat. That's not a threat to..." Yeah.

And the actors will say, "Well, that's not a threat to me.

I'm a real actor. I do Shakespeare and you know, whatever." Like, well, no, it is a threat to you because the audience has moved and the budget has moved and that'll undercut you. So they're looking at the wrong bellwether. When when

Tilly shows up in five billion TikTok posts, that's when you know you're dead, long before it hits you in your long-form movies. So you just got to look at

the video games, too.

>> A related story of this, which is OpenAI is working with Disney uh to bring Disney characters into Sora 2, >> right? So that

>> Yeah, they just announced that.

>> Yeah, it's fascinating. So

>> a billion-dollar investment and and licensing. I I I I think there's going

to be a certain fungibility between classic IP assets and and generative everything, and and so, for maybe the

short to medium term, it's reportedly a three-year licensing agreement that OpenAI and Disney struck. Maybe in the short term the the short-term remedy is

existing actors can license their visage out as an asset to customers who want to do sort of fan pics. But

really, if you're like a really popular star like a Peter Diamandis, you know what's the thing you should do right away?

>> Signed Sign my rights already.

>> Yeah.

>> Get your avatar out there. Get it built and out there right away. Get your Tilly Norwood equivalent, Peter or whoever, out there right away, so that personality can grab hold before, you know, the

true synthetics take over.

>> Yeah, it really is going to be a race for neurons, right? if if you're looking you're going to you know the general public you know Dunbar's number only

really cares about 150 people and holds them close and so the question is are one of those or 10 of those going to be

synthetic actors um and once you get to a point of popularity uh it's going to be hard to replace you >> for for for what it's worth uh maybe

to tie a bow on this also the Dunbar limit of 150 people that that was like in the ancestral environment if if the number is is valid at all in in the post

social media era you can maintain light casual associations with thousands of people and >> but Dunbar's number is is basically sort of the human tribe and I've done this

when I was running Singularity University when it's it's the number of people you can actually uh remember their names go deep with and so forth sure you can have

a Rolodex of 22,000 people but Dunbar's number in terms of who you feel connected to closely is is a real number.

>> I I'm with Alex on this one. What I

noticed was once you have Facebook and you could essentially Facebook acted as your RAM for Dunbar, you could move people in and out of that spectrum very easily without really noticing. And you

have the opposite effect also where once you kind of start to connect with enough people. Peter, you've probably had this.


I remember walking down University Avenue in Palo Alto uh right after one of our one-week executive programs and this guy stops me and he goes hey Salim nice to see you and I'm like have we

have we met? He said, I just spent the week in the classroom with you, right? And I'm like, wow. It's like our our brains are so blown up now with the limits of that, we need technology to expand that

capability and it's it's already done that to one extent and we can move things in and out. The question is what do we do when we have all these synthetic AI levels going going through that. So,

>> this episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses

thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every

development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform

provides a plan, then generates and pre-compiles code for each task. Blitzy

delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete

the sprint. Enterprises are achieving a

5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their

coding co-pilot of choice to bring an AI native SDLC into their org. Ready to 5x your engineering velocity? Visit

blitzy.com to schedule a demo and start building with Blitzy today.

Our next story here comes out of the White House. Trump signed an executive

order curbing state AI rules. So, this

is a decisive federal power grab uh over AI regulations. Trump's one rule

executive order is going to preempt state-level AI laws. Uh it's like, nope, it's not going to be, you know,

Washington uh Washington DC is going to win over everybody. It's not California laws or Texas laws. It's Washington DC.

I mean ultimately I think this is what the EU needs as well. Um it needs top level direction. It's going to be harder

there. Any particular thoughts on on the

one rule here? >> It's absolutely positively

necessary. I hate I hate it when this

happens but we got to do it. Um because

you know variety across states is one of our best assets. On the other hand, New York just passed a law that says you can't use the likeness in an AI of somebody who's deceased without going to their ancestors. Like, what what about

all these Einsteins floating around already? Like, how are you going to keep

it out of New York? There's no way to just launch it across the country and then New York users get blocked somehow.

I mean, it's just it's just unworkable.

Uh so >> I'm going to I'm going to claim I'm one of Aristotle's ancestors and you can't use his likeness. I mean how far back >> down Aristotle's >> I had two I had two thoughts when I saw

this one was when I saw one rule I very quickly thought about one ring to rule them all. Um uh and I just love the

politics of this where the a huge amount of the effort for Trump was saying let's push everything down to states' rights and now we're going totally the opposite

direction and I think it's a necessary thing. I agree with uh Dave here. It has


to be done because if we don't get um uniform AI treatment, uh where the hell are we going to get to?

>> I also I mean there's an interstate commerce angle here. Models are being trained in one state and inferenced in in other states. This, in my mind, and I I I read the executive order um in

the past 24 hours. The the EO is ensuring a national policy framework for artificial intelligence. I I I think

this is it's it's both reasonable under the interstate commerce clause and also necessary for for international

competition. It's not at all obvious how

a patchwork of state-based regulations results in anything other than total chaos.

>> I mean, this is this is a piece of the overall White House strategy on energy, on data centers, on chips. uh it's all

aligning everybody to make the US as competitive as possible on the global stage and to accelerate as fast as possible. It is a race to super

intelligence. Um and this is just part

of the uh >> can I make a radical prediction here?

>> Yeah, of course.

>> This is this: over the next 5 years the entire US Constitution will evaporate. Uh every clause is starting

to just melt away. Look, the right to privacy, Fourth Amendment, gone, right. Um,

we're we're we're going to see the whole thing. It needs to be rewritten from the

ground up and it's going to be interesting to see how that happens. And

I will move that.

>> Boom. That's what you need.

>> Then instead of the founding fathers, it's the founding models. Um,

>> for for the record, I I don't buy that prediction for one second.

>> Good. We can put some money on it.

>> That's Polymarket, baby. All right.

Uh, let's move to a conversation on the economy. Uh and you know this is data

just to support what we already know.

OpenAI finds AI saves workers nearly an hour a day on average. Uh so workers using OpenAI tools have saved between 40 and 60 minutes a day. The survey of 9,000

people in 100 companies found that 75% say AI makes work faster or better. The

biggest time saver. Over a million businesses today are using OpenAI tools. I'm going to couple this story

with our next one, which is layoffs uh announced, you know, 2025, we had 1.1 million layoffs, which is the most since

the 2020 pandemic.

All right. Uh, Dave, you want to jump on on this? I was talking to Scott Perry,

the CEO of LendingTree, a public company, yesterday actually, and he said 20,000 incredibly talented people in

Seattle are now cut loose from Microsoft and Amazon, and it's the best hiring opportunity for tech talent he's ever seen in his life. But these are really,

really solid great people that the mega tech companies have just cut out because AI is automating, improving, enhancing.

you know, coding is one of the biggest early beneficiaries and you know, my my top coders are 10 times more productive, so I don't need nearly as many. So,

that's where the layoffs are coming from. But this is just uh you know,

we'll look back on this and say, "Wait, what? That was a bellwether. Why did

I not notice this little thing?" And but when you see what happens in 2026, you'll say, "When did this all start?" Well, right now, this is when

it's >> Now, what do you predict for 26, Dave?

>> Continued.

>> Yeah. the the capabilities will be you know able to eliminate on the order of 80, 90% of all jobs but then the rollout and the

percolation is dependent on regulation and also corporate bureaucracy and so it's it's tough to predict how quickly people will react. My my guess

is that it'll get a very slow start.

Everybody's very stodgy. Um, but then everyone's a sheep. And when somebody in your industry is an early adopter and their stock goes up 10x just because they're an early adopter, then your board beats you up like crazy and says,

"What's what about us?" And then the sheep effect flips in 2026. So by the end of 2026, everyone's in absolute panic mode and then they're wishing they

started at the beginning of 2026. You

know, I I think there's going to be, this is one of my predictions, I think there's going to be an absolute need for all the medium-sized and large companies

to bring in a reskilling uh consultancy uh some type of a program, could be fully AI-based, but uh that provides some kind of a safety net for your

employees, that you're going to reskill people before you fire them, and if they aren't able to be reskilled, then they're let go. I also think that's a huge business opportunity for an entrepreneur out

there to build that kind of capability.

>> Totally. Totally right. In fact, you know, if we look in our portfolio, the companies that are quote unquote forward deployed >> um are killing it. And you know, if you couple that with what we just said,

there's 20,000 highly talented people in Seattle that just got cut loose. If

you're if you're growing your business, a lot of the younger companies uh you know 22 23 year old leaders are afraid to be forward deployed because they've never they've never done it before. They

don't have any management experience.

They don't have any enterprise sales experience.

>> Well, get hire hire those 20,000 people, train them on how to be AI forward deployed consultants or delivery people and then get them embedded back into corporate America at State Street Bank,

at at JP Morgan, at Walmart. they'll

hire your people instantly to get AI deployed inside their organization because they can't get that talent. But if you grab grab those

people, retrain them very very quickly on your own AI training platform and then get them redeployed into corporate America, your growth rate, you you'll be sold out every time you have a meeting,

you'll you'll generate a sale.

>> So the founders, the really young founders are afraid to do it. They want

they want to just like launch their software on Hacker News and hope that the world sucks it up and there's just this big gap between there and where corporate America starts and it's it's just never going to fill if you

don't get forward deployed.

>> Um I don't think this is a skills issue.

This is a cultural problem. The problem

is in corporate America with all the structural impediments in a big company, you need a mindset shift at a scale company to even adopt this. I think I think the large companies and the

medium-sized companies to be very specific about my prediction here are going to need to hire a very specific kind of consultancy, right? A company

that comes in and their job inside your company and I think every company's going to have a version of this is reskilling. And so that when you go to

work for a company, you know, there's a a reskilling um you know safety net there for you.

Exo. Yeah. Um,

but what I'm saying is it's not just reskilling. It's a mindset shift. Real


change. It's a cultural change that has to take place. And that's actually much harder. And I want to say two things.


>> There's cultural and mindset shift at the CEO, at the executive level, and at the employee level.

>> All of them.

>> It goes through it goes through the organization. Uh, and we've actually

been working on this for several years now. And I want to tell a quick story.


Our second ever client, when we finished one of our 10-week sprints, realized that they had to lay off a thousand people in the company and they decided, what are we going to do, because we're a family-owned business. We have uh we want to really

provide for these folks. What do we do?

We actually got them to give them a one-year UBI so that they could find their own passion, find their own work, and if they didn't by the end of the year, they would try and hire them back.

And it was an incredibly successful program. I think we're going to see a

lot more of that as we kind of transform the workforce.

>> All right, let's get into data centers, chips, and energy. Um, we're seeing data centers begin to pop up in countries around the world. I don't want to spend

too much time on this, but Qatar uh you know, QIA, the uh the sovereign fund there, is investing 20 billion to launch a data center in Qatar or Qatar or

however you want to pronounce it, as a Middle East hub. Uh we're seeing Microsoft uh and Sat and Satya just coming back from India, meeting with Prime Minister Modi there, committing

17.5 billion in India to expand an AI ready cloud uh there in the region so we've got uh I mean this is going to be

the case in all major nations these partnerships taking place the real >> this is Alex's comment about tiling the world with data centers and everyone

>> Drink. Tile the earth with sovereign inference-time compute. Drink drink.

>> Okay. But we're drinking coffee this morning, ladies and gentlemen.

>> Drinking water.

>> Alcohol. All right. So, uh here's here's the story I want to dig into. You know,

in our last pod, we talked about China's uh sort of incredibly expanding role.

So, China is set to limit access to Nvidia's H200 chips despite uh Trump's export approval. So, you know, President Trump

says to Nvidia, "Okay, you can export these." And now the China leadership is,

"No, no, no, you can't buy them. You

need to buy Chinese-made, uh, you know, GPUs." Uh, fascinating, right?

This is propping up its own chip economy. I think it's a smart move on

China's behalf.

>> This is so fun and annoying at the same time to watch. You know, this is pure protectionism. The US never did it

before and now we're now we're playing the game. But you know what happens is a

country invents something like an LCD TV or a car or you know whatever and another country says okay what we're going to do is we're going to protect the home market. We're going to manufacture our own. Then we're going to

dump it on your market cheaply and we're going to dump it until your companies collapse and the venture capitalists all run away and then we're going to price it up. So what we did is we embargoed

the chips from China and they're like, "Oh we need to build our own whole supply chain." And as soon as they get

it up and running, we're going to say, "Oh, no, no, it's okay. Now we're going to actually allow you to buy the H200s and that entire thing you just built

makes no economic sense." And so China's saying, "All right, I I see what you're doing here. I've played this game for a

long time. We're not going to we're not

going to buy them." Like, but why? You

know, it's an incredible buy. Why why

would you not allow us to buy them? cuz

we we already made a massive investment in our own fabs. We're going to have to keep subsidizing that to get this up and running cuz we know what you're doing here. You're going to let us buy them

right up until our stuff collapses and then you're going to cut it off again.

>> This is it's a trust issue.

>> Big trust issue.

>> There's no trust at all between the US and China right now.

>> Well, this the same thing happened, right? The Japanese came over during

Trump's first administration and spent a lot of time negotiating a trade deal and and then just a few months ago, Trump

um the administration cancelled that trade deal. And the Japanese are like,

"We're not negotiating another one because we don't know which way is up anymore." And every single time it

changes completely. So there's no trade

deal. And this is really a a big problem

going forward. And I think what China is

saying is we don't want to play that game.

Well, there's no doubt that the outcome is look, two completely separate ecosystems. You know, Europe is kind of a wild card. It's interesting and and so is

India, kind of a wild card right now, but there's no doubt the US ecosystem is going to grow completely independent of the China ecosystem because there's no chance of reestablishing trust after

that chip embargo.

>> Yeah. There's like no way that that's going to get get mended.

>> That's right. So, sovereign data center AI compute to Alex's point, >> it's it's a new It's almost like a second cold war. It it's it's a a world that we move to where there are spheres

of influence and spheres of fab and spheres of compute and the decoupling happened.

>> Yeah.

>> Okay, move on to power generation.

Uh there's a company called Boom. Uh

many years ago, it set out to build the first supersonic uh passenger airliner to replace the Concorde. And I was so impressed by the the founder and CEO,

his chutzpah, if you would, to take on this moonshot to build a supersonic consumer airplane. And I was like, I

don't know how you get there. How much

money is going to be required uh to to build this. So, it's a fascinating backstop

that Boom had been developing, you know, supersonic uh engines and now

they've unveiled a supersonic super power turbine uh that's able to provide 42 megawatts of natural gas turbine capabilities

uh to data centers. Um and so this is, you know, a backstop business model uh for Boom. uh and it's and it's huge,

right? So, uh this is moving power to

right? So, uh this is moving power to the data centers, right? It's uh it's a gas turbine strategy and we've heard before all the gas turbines have been

sold out for some time. Uh Alex, you want to jump on this?

>> Yeah, I mean the as you were were gesturing, Peter, that the wait times right now for gas fired turbines for AI data centers are seven years in some cases. So I I think this is a brilliant

strategic pivot by by Boom. It also, referencing comments from a minute ago, to the extent we're in almost

a quasi second cold war, this is is almost like a self-directed Defense Production Act type move, pivoting resources perhaps from turbines for

supersonic consumer jets to turbines for AI data centers. And of course there there are synergies there, but this is I think it's a brilliant pivot. And the

the irony is there's probably a much much larger addressable market for gas turbines for AI data centers than there is for consumer supersonic jets at this point. I I just hope for the sake of

Boom that that they retain at least some semblance of the original supersonic vision and just don't get overwhelmed by the AI data center business.

>> I just love that audio clip. Hey, hey,

behind the scenes, I need that audio clip like right away. That that is >> because you know there's so many companies including Vestmark you know one of the ones I founded pre-AI >> uh you know manages $2 trillion of

assets 20 million lines of code profitable great business and I'm like guys you got to be an AI company like tomorrow >> pivot pivot >> pivot pivot pivot pivot we've got

>> you know so this is a great case study like you you you wouldn't think that a jet engine company is culturally going to pivot and become a power generation company,

but when you look under the cover, it's like, well, what are our assets here?

Well, we've got the blades, we've got the manufacturing, we've got metal, you know, like that's all it takes. The age

of AI has so much opportunity that didn't exist the day before. And you

don't have to be that close to the center point. You have to be adjacent

and just pivot quickly and you and you'll succeed wildly. And so I I hope these guys just crush it, in fact I know they'll crush it cuz cuz like you said,

Alex, they I I know personally >> data center operators that yeah they they'll spend anything and they're and they're pre-buying too. They'll pay you upfront for something that you're going to make next year

billion dollar backlog. Uh and it's a product they can deliver immediately, right? This is on premise power

right? This is on premise power generation for data centers which is so critical. You know, they've been

critical. You know, they've been working, Boom's been working on this for, I don't know, six, seven, eight years, and they've built the scale model of their supersonic airplane, and they're trying to get advanced orders

from all of the airlines. But to get through the FAA thicket is so difficult, decade, that's decades, >> it will kill you. But if you've got a an actual business model delivering revenue

right now, I mean, I I agree with you, Alex. I hope Boom actually delivers on

their original idea. I think this increases the probability a huge amount.

Right? Then and this is the equivalent of uh of Amazon realizing with uh Amazon Web Services, it's got something that it can offer uh to everybody else that

makes you know very strong near-term profits.

>> Elon, or Elon, like, delivering Starlink now and Mars colony in 10 years.

>> Yeah, it's >> that's the sexiest looking gas turbine I've ever seen, by the way.

beautiful looking thing.

>> I'm sure after you run it, it gets dirtier.

>> 1.25 billion in backlog. Congratulations

to the team at Boom for that strategic pivot. And everybody else,

>> everybody learned from this story. Like

we should track this uh you know in a few weeks or a few months.

>> What do you have? What do you what are you building right now that's a cost center for you that could become a profit center for you in the AI ecosystem? That's the question. All


right. On the energy side, China builds nuclear reactors at $2 per watt versus the US at $15 per watt. Uh, again,

what's going on here? Why is that why is that happening? Alex, do you have a

thought?

>> Yeah. Well, China does have more people than the US. China does have a need for more energy. If if there if AI were not

part of this equation and and China were to attain US per capita energy footprint standards, China would need more energy

than in in a total sense in an absolute sense than the US. That that part makes sense. What doesn't make sense if if you

look at the permitting processes required for nuclear energy in the US, it's a very different beast. There are

obviously the the the NRC regulates US nuclear power deployments at the national scale, but then on top of that, you have some states that de facto ban nuclear power entirely. We have a

patchwork of state and local regulations that make it extremely difficult to to deploy nuclear energy. Here in

Cambridge, Massachusetts, many people may or may not be aware of this.

Cambridge has a nuclear reactor. It's

it's not very well advertised. It's on

Massachusetts Ave. on the the MIT campus, but we have a working nuclear reactor and and have had one since I think the the late '60s, early '70s, but that that's very much like not par for the

course in the US. I wouldn't be surprised if sometime in the next 2 to 3 years, we see some equivalent for nuclear energy of of what we just saw with the White House's executive

order. >> We're going to see it in the next few months. I mean

the bottleneck is not physics, it's permitting and execution and that's got to be cleared.

>> Yeah, >> I'll give you a little uh side story related to this. Um you know the MIT brand, here's the MIT brand. The MIT brand is absolutely skyrocketing in this AI revolution. But we found out that that

MIT nuclear reactor is going to be exothermic and powering the campus. And

I'm like, wow. Because we don't have a single nuclear reactor in the state, you know, we can't get that approved. We buy

our nuclear power from New Hampshire, but MIT can actually get stuff like that done now. Just crazy how how that brand

has skyrocketed in impact with this AI revolution. All right, want to jump into

robotics. A special uh you know hat

tipping here to Salim. This is Salim's perfect robot. It's got something like

14 different arms on it. Salim, are you happy with this robot?

>> This looks awesome. Look at all the chickens that can move around very quickly. Um, this this is this is Yeah,

I love it. Just love it.

>> For those of you new to the pod, Salim is having a running debate about, okay, why humanoid robots? Why just two why just

two arms? Well, Salim, you've got all the

arms you could possibly put on a body here.

>> I just love all the wires sticking out of it. Also, like it looks

>> I mean there there is a serious story here too, like in in China there's an image doing doing >> I can't wait for that.

>> Yeah. doing doing the rounds with six arms that there I don't think there's anything like super Yeah.

>> Yeah. I was going to bring that I was going to bring that article forward as well.

>> Yeah. There is

not about six armed robots. Yes. Coming

out of China, it's not about having a humanoid robot. It's about mimicking, it's about

integrating into human spaces and and kind of moving around where humans have been. And so there there's some case for

it. But in general there's it's very

easy to be 10x more efficient than a human being. We're we're very very

inefficient in most of the things that we do.

>> Yeah. I think evolution has done evolution has over billions of years or maybe order of magnitude a billion years done a search through body space. And

there are lots of body shapes that aren't anthropomorphic humanoid bodies.

You know, more arms, more legs, more heads, uh lots of different formats. And

I I do suspect we'll we'll see, to to Salim, I'm not sure if this is your dream or your nightmare, but we will see a Cambrian explosion, lots of different body shapes tested.

>> All right, listeners, call it: dream or nightmare. It's just the most effective

use case for trying to get something done.

>> Call call out to our listeners. I made

that on Nano Banana. Somebody make, now that we know about the woolly mouse, make Salim's perfect robot for turning the woolly mouse hair into sweaters for us and then send it to us. We'll put it

on the next pod. Okay, that's a hell of a prompt. All right. Uh, another form of

robots are drones. And I just found this Antigravity drone. That's the the the

name of this drone. It's manufactured by a company called Insta360 in Shenzhen.

For those of you who don't know, Shenzhen is really sort of the entrepreneurial hotbed in China. Uh I've visited many times. You can go there and every part

and component you need uh is there uh to be manufactured. So check out this check

out this video uh of an 8K 360-degree drone uh talk about marketing genius.

So, this drone user is using it with VR

goggles and he's on a platform suspended by a balloon at 5,000 ft altitude and

the drone is just flying a beautiful uh you know 360 view of him.

>> The dude standing on a platform suspended by a hot air balloon. That's

way more interesting than the drone.

That's ridiculous.

>> Well, it's it's like what are you going to do to capture someone's uh eyeballs, their attention, right?

>> You know, I think Salim is on to to something here. Drones are a commodity,

but the the experience of being on a hot air balloon at altitude in a VR headset controlling a 3D drone, that that's got

that one could build an enormous business out of. Maybe that's more interesting than the drone itself.

>> Yeah.

All right.

Well, all right. Let's move on to our next uh story in the robot.

>> You have the VR headset. Why do you need to be suspended up at 5,000 ft? That

makes no sense.

>> Well, for latency, right?

>> You want to see yourself suspended on the balloon at altitude. It's more

exciting or something.

>> All right, let's go to our next robot story. Uh, and this is robotically uh

automated vertical farms, which is an important part of our future food chain.

So, of course, out of China once again, and uh what we're going to see here are these massive vertical farms uh that are operating 24/7.

um basically growing at the perfect uh you know light frequency at the perfect soil

and and uh drip irrigation pH and it's being you know the AI is checking to see if it's ripe if it's ready for harvesting and the robot arms are harvesting and this is going basically

24/7 uh in a city near you. I mean this is one of the futures you know stem cell grown meats and vertical farming that helps us bring food to the individuals.

I don't know if you realize this guys but like half the cost of a meal that you have is food miles transporting the food uh from you know sort of

Argentinian beef or Chilean red wine or >> the average the average meal in the US travels 2400 miles to get to your table.

>> Yeah. Um this is something really this is something kind of incredible. We've

been tracking this for a while. Um you

know we've crossed over into um economic efficiency for uh farming and agriculture and food production. This

calculation I've seen that's the most startling is if you took 35 skyscrapers in Manhattan turn them into vertical farms that would feed the entire city sustainably. So you think about the food

security uh logistics trucking all of that stuff and when you can automate the entire farm the yield is something like 7 to 9 times what you can get with

horizontal farming because you can give exactly the right frequency of light. Uh and by the way uh you save 99% of fresh water and 70% of our our fresh water goes to agriculture so you

don't need a lot >> and no pesticides no fertilizer all of this stuff the benefits are kind of incredible so we're going to see vertical farms next to every restaurant

uh over time just feeding the restaurant. This is amazing stuff.


>> Yeah, >> it's probably also just quickly worth pointing out that video to to my knowledge was actually put out by the Chinese government and this is a a new

form of soft power, soft influence broadcasting these these visions presumably ground truth accurate but presumably of radical forms of

automation. I think we're going to see

many forms of propaganda, soft influence showing these amazing tech demonstrations of robotics in action start to hit the internet.

>> And by the way, a humanoid robot makes no sense in that factory. Just

>> agreed. But a humanoid robot does make sense in this next story again out of China. Uh China is testing retail

automation with humanoid robots running the shops. Right. So what do we have

here? You know, you're walking by, you

look inside, you don't see humans, you see a robot behind the table, behind the desk, and you know, I want to go in and check it out. So, um, this is the rise

of the robot-run convenience store, uh, taking humans out of the loop. Uh we've

seen Amazon do a version of this, right, with their Amazon Go where you walk into the shop and you just pick up anything off the shelf and there's cameras, you

know, noticing what you took and noticing what you put back on the shelf and then you're automatically rung up as you walk out. Uh but here we've got a

two-armed, two-legged humanoid robot doing the the store clerking. Um, I I do think that this is going to be viewed as sort of like the atomic vacuum cleaner moment of 2025. Like, do do you really

need a humanoid robot in a convenience store? No. Probably there's more

ergonomic solutions, like as you say, Peter, Amazon's Just Walk Out technology on the one hand. On the other hand, I would love to to live in a world where every convenience store is filled with

humanoid robots in the US doing this as well.

>> I I think it's fun. I mean, I'm sure we'll see this I'm sure we'll see this this year as soon as uh as soon as 1x with their Neo Gamma or Figure. And

we'll be visiting Figure at the end of January to record our next podcast with Brett Adcock. I just spoke to him

yesterday.

>> Uh super excited about going and seeing behind the scenes there.

>> Two two counter predictions. One is I think this takes at least 5 years to have a convenience store operator with a humanoid robot. And by the time that

five years arrives, we won't need convenience stores anymore for various other reasons.

>> Ah, interesting. Everything is being conveniently taken to you by a drone.

>> Drone delivered.

>> Yeah.

>> You know, with Brett Adcock, maybe he'll let us go behind the scenes for real, like into the factory because with 1X, you know, there's too much proprietary stuff. They wouldn't let us do it. But


if they cleaned up a little bit, maybe we could have done it. But it's

incredible when you go back and see the the actual robot construction. It's

God, if we can get footage.

>> We went back we went back and saw it but we couldn't bring the cameras back there is what you were saying.

>> Yeah. Yeah. Too many secrets.

>> Another story here back in the US.

Boston Dynamics announces its plan to ship automotive volumes of humanoids. Uh

and this is uh from their lead uh their product. I actually interviewed the CEO

uh at FII. So we're owned by Hyundai for a reason. We can ship automotive volumes of humanoids. So

there's a billion cars uh right now out there and these are being manufactured at you know tens of millions. Uh imagine

well we've talked about this Elon plans to do this Brett Adcock plans to do this. We've heard this from Bernt Børnich

uh now we're hearing this from Atlas right the ability to manufacture uh at the millions and tens of millions robots building robots.

>> We don't need billions of cars. We do

need billions of humanoids. Yeah. Two

armed humanoids, Salim. Two armed

humanoids.

>> Okay. Well,

>> don't be arrested.

>> I'm staying silent on this one.

>> Uh uh here's a story that's fun. Um

years ago, uh I had the pleasure of meeting an extraordinary entrepreneur, Eric Migicovsky, who built the Pebble Watch. And uh he did this on uh on a

crowdfunding platform. Remind me which

one it was. Um it was Kickstarter. Yeah.

He built he was running out of money.

>> Yeah. He was running out of money and he had like 3 months of cash in the bank.

He was able to get funding for his Pebble watch.

>> And so he goes on Kickstarter and he says, "Hey, if you want one of these watches, uh, fund me." And he went from uh from one problem of not having enough

money to another problem. I forget how many orders he had. I

>> I'll I'll So Eric's a fellow Waterloo grad. Um and he uh was running out of

money as you say even coming through Y Combinator. No investor in Silicon Valley, he talked to about 20 plus, nobody would fund it because hardware was kind

of a bad word back then so he puts it up on Kickstarter trying to raise a hundred grand to build a prototype of his watch gets $10 million worth of orders.

>> That's right.

>> Uh and it's an important point because it tells you two or three things. One, the

investor is wrong. Fine. Secondly, if

you can do this, why do you need the investor at all? But the third thing that I think is the most powerful and one of the big inflection points, we talk a lot about this in Exponential Organizations, is that now that you can

do this type of Kickstarter type thing, you can actually get market validation for a product before you build a product.

>> And we've never been able to do that before in consumer uh hardware or consumer products. So this is an amazing

inflection point. Sony is actually

launching anonymous Kickstarter campaigns and then funding the winners because their product development has not been the greatest over the last couple of decades. So they're kind of

tapping into this modality which is really powerful. So Eric goes from

having one problem of not having money to another problem, which is he's got to deliver now on $10 million worth of orders. So he literally takes the first

plane out of the US to Shenzhen and and basically builds the manufacturing chain in China uh to deliver this. Uh and it

was a great watch. I remember having it, I gave it out at Abundance 360 years ago, a decade ago, but then Apple Watch came out and sort of crushed the

marketplace. Well, uh Eric's come back

and he's got something called >> pivoting to AI.

>> Yeah. The Pebble uh smart ring. And for

75 bucks, you wear a ring that's got one purpose. It's got a small little

physical button on it. And when you press the button, a microphone records whatever you want. So this is, you know, you remember like waking up in the middle of the night like remembering

something. You just push your ring and

you whisper into your ring. Or you're

meeting with somebody, you walk away from your meeting and say, "Okay, I need to call, you know, XYZ as soon as this is over." And it's sort of uh, you know,

reminders. uh and it's notes that go

into your AI model. It has one purpose, right? This is is not, you know,

tracking your heart rate or your sleep.

It's tracking uh sort of uh bits that dribble out of your out of your thought during the course of a day.

>> I I love and critically, like, where does the voice go? The voice goes from the ring to an on-device, on-your-phone hosted large language model that then

transcribes and analyzes. So what is this really doing? This is really to to the extent that a a ring stays on you almost all the time. This is about adding a button to the human body that

enables you to speak to a large to a foundation model that's also on your body. And so question to uh to to the

moonshot mates here. How long until it's not just a button on your body that enables you to talk to a foundation model, but you're you're swallowing

foundation models? How long to the first

edible foundation model? Well,

injectable or sub subdermal.

>> You think it'll be injectable versus edible first?

>> Uh, well, yeah. I mean, if you're if it's edible, it's going to pass through your alimentary canal all the way out to the other end.

>> So, I I want this, you know, it's interesting. There's part of the skull,

right, the mastoid bone in the back behind your ear. That's this hollow area of uh of of of

bone. I think it's a great place to

implant a a permanent um uh you know microphone and speaker. Uh yeah, that's my prediction. We're gonna be implanting

a microphone speaker at the back of your head.

>> That exact thing was on Shark Tank and Mark Cuban vomited.

>> Really?

>> You can iterate hardware much faster outside the body than inside the body. I

don't think it'll be invasive for a while. Yeah, I think we'll see

swallowable swallowable foundation models in the next two years.

>> Bluetooth like just Bluetooth in and out of your uh body to your phone.

>> Bluetooth but critically locally hosted.

Very locally hosted.

>> Okay.
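For readers who want to picture the ring-to-phone flow Alex describes above, here is a minimal sketch, assuming the open-source openai-whisper package for local transcription; the audio filename and the keyword-based routing are purely illustrative stand-ins for whatever Pebble's on-device model actually does.

```python
# Hypothetical sketch of the ring-to-phone flow described above: the ring's
# button triggers a short recording, and everything after that (transcription
# plus note filing) runs locally, never leaving the device.
# Assumes the open-source `openai-whisper` package; "note.wav" and the keyword
# list are placeholders, not anything Pebble actually ships.
import whisper

def file_voice_note(audio_path: str) -> dict:
    model = whisper.load_model("base")           # small model, on-device-class hardware
    text = model.transcribe(audio_path)["text"].strip()

    # Toy "analysis" step: route the note based on simple keywords, standing in
    # for whatever the locally hosted language model would do.
    is_reminder = any(w in text.lower() for w in ("remind", "call", "tomorrow"))
    return {"text": text, "category": "reminder" if is_reminder else "note"}

if __name__ == "__main__":
    print(file_voice_note("note.wav"))
```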

>> All right. A few subjects, a few a few topics on space here. Let's move us along, guys. Chile becomes the first uh

Latin American country to enable Starlink direct-to-cell. Uh so I mean listen,

Starlink is such the killer app uh for for SpaceX and the ability for him to potentially bypass the current phone industry which I mean tens and hundreds

of billions of dollars has been put down in terms of uh of uh you know 4G and 5G level distribution networks now to be

bypassed by Starlink. Crazy. Um, but

this is what I find this next story.

Take a listen. I mean, can you >> can I just go back to that? Can I just go back to that just for a sec, Peter? I

think this is a very big deal because, you know, throughout history, this is the failure of government. The

UN should have launched something like Starlink. You know, they should be

launching.

But they're fundamentally unable to and it needs private sector to do this type of stuff. What I find incredible is the

demonetization and the dematerialization of technology allows now a private individual to do something like this

that changes the world completely uh in such a powerful way and you kind of can say well governments just step out of the way and let private sector do everything going forward right because

it'll navigate most of this with light regulation uh we can navigate most of this stuff now so I'm really really excited by this >> okay can I ask you guys a question because I was trying to look at the data

behind this. You know, the idea of

orbital data centers wasn't in the conversation how long ago. I mean, we weren't talking about this a year ago.

We weren't talking about it 9 months ago.

>> It's the last guy the guy at Abundance 360 >> uh March published a paper on this about 14 years ago and if you were reading Accelerando in which case you had the blueprint for everything we're seeing

now.

>> Sure. But it wasn't.

>> But no, but no. March a year ago, one of your guys, one of your Abundance 360 guys was talking about it and he was going to do Bitcoin mining in space at that point in time and everybody thought he was insane. And we also thought we couldn't

do the cooling. So that was only March a year ago. So that's nine months.

>> But there's a >> So I know at that point it was nothing.

>> Yeah. But the last 6 months, really the last four months, all of a sudden, every single player, we've got companies out of China. we saw at the last pod. We


have now a company out of Europe and we have a dozen companies in the US. And

then I found this video clip which I found fascinating because Google was not discussing it a few months ago but here we are. Listen to Sundar.


>> Yeah.

>> How do we one day have data centers in space so that we can better harness the energy from the sun. You know that is 100 trillion times uh more energy than

what we produce in all of Earth today.

So we want to put these data centers in space closer to the sun uh and and I think we are taking our first step in '27. We'll send tiny uh tiny racks of uh

machines uh and and have them in satellites, test them out and then start scaling from there. But there's no doubt

to me that a decade or so away we'll we'll we'll be viewing it as a more normal way to build data centers.
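Sundar's "100 trillion times" figure can be sanity-checked with a quick back-of-envelope, using assumed round numbers for the Sun's total luminosity and global electricity generation (neither figure is from the episode); if "what we produce" means electricity, the ratio lands on the order of 10^14, consistent with his claim.

```python
# Back-of-envelope check on the "100 trillion times" figure quoted above.
# Assumed round numbers, not figures cited in the episode.
SUN_LUMINOSITY_W = 3.8e26                  # total solar output, watts
WORLD_ELECTRICITY_TWH_PER_YEAR = 30_000    # rough recent global electricity generation
HOURS_PER_YEAR = 8766

world_electric_w = WORLD_ELECTRICITY_TWH_PER_YEAR * 1e12 / HOURS_PER_YEAR
ratio = SUN_LUMINOSITY_W / world_electric_w

print(f"Average electric power produced on Earth: {world_electric_w:.2e} W")
print(f"Sun output / world electricity:           {ratio:.1e}")  # ~1e14, i.e. on the order of 100 trillion
```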

>> I never thought I'd hear Sundar say tiny racks of machines. That's

hilarious to me.

>> I just love the school boy level excitement he's got there. You can see him actually grinning. He's like, "Oh, data centers in space. This is amazing."

>> I I love the AI AI generated. The big

banner on top of that video is AI generated. It's like we're going to

we're going to always tell you that this scene in deep space is AI generated as if as if it was not. Um the the reason the reason Peter why you know I mean

even though I I may be a little bit glib saying well if you had read Accelerando this would have been obvious to you almost 30 years ago on the one hand the reason you know that this is a sudden phase change in in the way the industry

works is Google's plans this is public information the Google plan to launch these so it's TPUs first of all Google's launching TPU based data centers

obviously are on Planet satellites, Planet Labs, it's not Google's own satellites, it's Planet Labs. So, so you know, if Google's hitching a ride via

SpaceX on Planet satellites, this is all of a sudden. I I I'll say that second point. Sun-synchronous orbit is about to

become very very crowded.

Sun-synchronous orbit is is is a a low Earth orbit for satellites that want to always have sun exposure, never pass behind the Earth, never be in the shadow, always have solar power for their panels. It's going to be very

crowded.

>> It's real estate. It's a limitation.

And there, you know, there currently are limits on how close you can get to other satellites. Um, that's going to be a

real it's going to be a real challenge because we've got, you know, a dozen companies all wanting to do this at the same time. It's going to be a race and

how the FAA, which governs this, is going to decide who gets the territory, who doesn't. In geostationary orbit, uh

there's a very clear demarcation of I own these orbital slots over my country, but low Earth orbit doesn't have that

situation.

>> Peter, you're making the case for the Dyson swarm. Again, the Dyson swarm.

So we move out of GEO, we move out of LEO, and Sundar himself in this clip was saying, we want to get closer to the sun. So we're sleepwalking straight into the Dyson swarm. Well, Peter, to your prior point too, this was science fiction a year ago and now suddenly it's mainstream among the top CEOs in the country. How does that happen? You look at Elon and his credibility. You look at, you know, Alex, your credibility. A lot of things that were impossible a year ago are going to be very easy a year from today. And if your track record of predicting them is near perfect, then the credibility of these crazy-sounding ideas immediately catches on. And you're going to see a lot more of that, I think, because the capabilities are growing exponentially, but you know, some of these things are truly harebrained and some of them actually are...

>> Is there line of sight on solving the heat dissipation problem for these satellite data centers?

>> Yeah, and for that you radiate in the direction of the cosmic microwave background.

>> Yeah, the final answer shocked me, but for every square meter of solar panel it only takes one square meter of radiative cooling, which really surprised me. I thought it would be, well, we estimated on Gemini, which was wrong, at 10x, that you'd need 10x more area. And that was just wrong. It's cooling at 1x, and I don't know how, and it's all aluminum-based, so it's not weird, expensive metals or anything like that. So yeah, point it into deep space like Alex has been saying forever, and for whatever reason it's just flat-out working.
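
A rough sanity check on that roughly 1:1 claim, using the Stefan-Boltzmann law: a fraction of the sunlight hitting the panel becomes electricity, essentially all of which ends up as waste heat in the chips, while a warm radiator facing deep space (a sink of only a few kelvin) can reject several hundred watts per square meter. The efficiency, radiator temperature, and emissivity below are illustrative assumptions, not numbers from the episode.

```python
# Back-of-envelope check of the "1 m^2 of radiator per 1 m^2 of solar panel" claim.
# All parameter values below are illustrative assumptions, not figures from the episode.

SOLAR_CONSTANT = 1361.0      # W/m^2, sunlight at 1 AU
PANEL_EFFICIENCY = 0.25      # assumed fraction of sunlight converted to electricity
STEFAN_BOLTZMANN = 5.67e-8   # W/(m^2 K^4)
RADIATOR_TEMP_K = 300.0      # assumed radiator surface temperature
EMISSIVITY = 0.9             # assumed emissivity of an aluminum radiator coating
SIDES = 2                    # a flat radiator fin can radiate from both faces

# Electrical power produced per square meter of panel; essentially all of it
# ends up as waste heat inside the compute payload and must be radiated away.
heat_per_m2_panel = SOLAR_CONSTANT * PANEL_EFFICIENCY  # ~340 W

# Heat a square meter of radiator can reject to deep space (the ~3 K sink term
# is negligible compared with a 300 K radiator).
reject_per_m2_radiator = SIDES * EMISSIVITY * STEFAN_BOLTZMANN * RADIATOR_TEMP_K**4  # ~830 W

ratio = heat_per_m2_panel / reject_per_m2_radiator
print(f"Radiator area needed per m^2 of panel: ~{ratio:.2f} m^2")
# With these assumptions the answer is well under 1 m^2, so a ~1:1 ratio is plausible.
```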

>> So most of...

>> I took all of the comments from our last two pods and ran them through one of the LLMs and said, "Okay, pull out the most interesting AMA questions." Here we see a list of 10 of them, gentlemen.
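
For listeners who want to try that step themselves, here is a minimal sketch of the kind of extraction workflow described: feed the raw comments to an LLM and ask it for the ten most interesting AMA questions. It assumes the OpenAI Python client, a hypothetical comments export file, and a placeholder model name; the hosts did not say which LLM or prompt they actually used.

```python
# Minimal sketch of the described workflow: dump podcast comments into an LLM
# and ask it to surface the best AMA questions. The file name, model choice,
# and prompt wording are illustrative assumptions, not what the hosts used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("comments_last_two_pods.txt", encoding="utf-8") as f:
    comments = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You curate audience questions for a podcast AMA."},
        {
            "role": "user",
            "content": (
                "From the listener comments below, pull out the 10 most interesting "
                "AMA questions. Deduplicate near-identical questions and return a "
                "numbered list.\n\n" + comments
            ),
        },
    ],
)

print(response.choices[0].message.content)
```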

Let's pick out a few to answer. I'll start with one, which is: how do you make these space-based AI data centers fault tolerant? There are sunspots, there is the potential for disruption, God forbid, even from an EMP at some point. Any ideas on making them fault tolerant?

>> Those are two very different faults.

>> Yeah, yeah, both.

>> Disruptive.

>> There are lots of different failure modes, so I do think this is another multi-billion-dollar company that someone should start. There are many techniques right now, ranging from switching from silicon-based electronics to maybe other semiconductors.

Yeah, like gallium arsenide, II-VI or III-V semiconductors that are more fault tolerant and have different band gaps; to designing electronics that are intrinsically, at the design level, better able to tolerate faults; to just doing what is right now a standard protocol, which is: if there's a solar storm or bad space weather, you shut down or switch to safe mode. So there are lots of partial solutions here. To my knowledge, there isn't a definitive, industry-standard solution for what happens if you're in the middle of a training run.
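
On the training-run question specifically, a common mitigation on the ground is aggressive checkpointing so a job can resume after an unplanned shutdown, and the same idea would apply to a safe-mode pause for space weather. Below is a minimal sketch of that pattern; the checkpoint path, save interval, and the space-weather-alert stub are illustrative assumptions, not an industry standard.

```python
# Sketch of checkpoint/resume so a training run can survive a safe-mode shutdown
# (e.g., a solar-storm pause) and restart quickly. Paths, the save interval, and
# the space_weather_alert() stub are illustrative assumptions.
import os
import torch
import torch.nn as nn

CKPT = "train_state.pt"

model = nn.Linear(128, 1)                        # stand-in for a real model
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
step = 0

# Rapid restart: load the latest state if a previous run was interrupted.
if os.path.exists(CKPT):
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    step = state["step"]

def space_weather_alert() -> bool:
    return False  # stub: in practice this would poll a space-weather feed

while step < 10_000:
    if space_weather_alert():
        break                                    # drop to safe mode until conditions clear
    x = torch.randn(32, 128)
    loss = model(x).pow(2).mean()                # dummy objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    step += 1
    if step % 500 == 0:                          # frequent checkpoints bound the lost work
        torch.save({"model": model.state_dict(), "opt": opt.state_dict(), "step": step}, CKPT)
```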

>> I just hate to think about the idea of all the data centers in orbit shutting down because there's a solar storm for the next 12 hours. We're getting hit by alpha particles.

>> But how do we solve that in general?

Like if there's bad weather or a blackout on Earth, you have diversification. So, if anything, again, let's put space-based AI data centers throughout the solar system, so if there's bad space weather in one part, there isn't in another.

>> That's a great point, actually. I bet earthquakes and tsunamis and hurricanes are a much bigger problem than solar storms.

>> All right, let's pick another one of these.

>> Hey, just to make a point, though, there's kind of a flaw in the question, too, because when you have Skylab up there, you want it to be up there for 20 or 30 years and you don't want it to get hit and destroyed or anything. But these space-based data centers need to be replaced every three years with new chips.

>> And so it's a constant launch, recycle, launch, recycle thing.

>> If somebody EMPs the entire thing and destroys it, then there's a war, of course. But it was going to get replaced in a three-year cycle anyway. It's not like Skylab.

>> Interesting. One of the things we did at Planetary Resources, when we were looking at asteroid mining, was set up the software so we would expect constant disruption. And we focused on rapid restart of the system so it would boot up extraordinarily fast. All right.

>> Can I tell a quick story here?

>> You can, but I want you to choose one of these AMA questions also.

>> Sure. You and I were sitting in a hotel in Dubai, and Richard Branson walks by and says, "Hello." We grabbed a quick drink and he said, "Peter, how's my investment in Planetary Resources going?" And you described how it was going, it had NASA contracts, etc. And Richard turns to me and goes, "This is why Peter's interesting, because in a random hotel lobby I'm suddenly having a conversation about asteroid mining off planet, just like this. This conversation happens nowhere else in the world except with Peter. We love you so much."

>> It was fun. All right, Salim, pick a question here. Is this question bingo?

>> Should we expect G20-level initiatives for UBI within the decade? I would hope it would be within a year. It needs to happen very, very fast. I think it'll force the conversation. But, um...

>> Universal basic income, right? Universal basic income, replaced soon by UBS, universal basic services. But I think you shouldn't expect much from the G20, period. I think that's the flaw in the question. But in general, we're going to see this rolling out in a pretty rapid way. Lots and lots of experiments are being done all over the world on this, because we have to move to something like that. The social contract is completely being wiped out in the current model.

>> Dave, why don't you pick a question next?

>> Okay, I'll take number one. How can AI lift up those who aren't international entrepreneurs? I think, one, listen to the podcast, get subscriptions, play with the tools, and then brand yourself as the AI expert within your company. If you're not going to be an entrepreneur, that's fine. The demand for this knowledge inside the regular corporate world is going to go through the roof in 2026, and it helps if everybody around you knows you're the AI person. And also, don't be intimidated. Historically, if you wanted to be a software god, you needed to be very, very software-y. That's not true with AI. It's much more intuition-based. You can build virtually anything with voice prompts. Just knowing how it applies in your industry will separate you. So just jump in the game.

>> Yep. Amazing. Alex, do you have one?

>> I'll take question number four for 10 trillion.

>> Is pure scaling enough, or what comes after? So I think it's a trick question. I think pure scaling probably is. By pure scaling, I'll construe the question to mean we freeze all algorithms: no new algorithms are allowed to be developed in AI, but we're allowed to shovel more and more compute, especially inference-time compute, into the existing algorithms. I do strongly suspect that if we froze all the algorithms we have today, no new architectures, but we got lots more compute coming online, the existing architectures combined with scaled compute would be enough to give us AI smart enough to tell us what a perfect algorithm would be, to the point where we get our highly coveted AI-researcher recursive self-improvement, the final algorithm, and we can just ask our scaled algorithms what comes after. So, in summary, my answer to question number four is yes, I think pure scaling is probably sufficient. Is it all that we need? No. Of course, in the real world algorithmic development is continuing and we're going to get both. But could we live with pure scaling at this point? My guess is probably yes. All right, let's answer one more here. Number three: how do the Moonshot Mates prepare day-to-day for each podcast episode? Yeah, I think we can share that. So, let's see. Alex, you're constantly providing the team with an incredible list of all the breakthrough stories you're searching. How many AI stories per day do you think you generate for us to look at?

>> Oh gosh. Order of magnitude, 20 important stories per day. I'm also, at this point, spending so much time just reading primary sources, arXiv papers, etc., living in the zeitgeist of the moment, because, after all, the singularity comes around only approximately one time per planet. So it's a special time. I do also, at this point, probably should say, I'm turning all of these stories, in addition obviously to research for this show, into a quasi-daily newsletter. Just trying to...

>> Follow Alex on X. He puts out some incredible daily, sort of interesting AI rants, I would say, or AI...

>> Follow me on X, follow me on LinkedIn.

It's a genre I'm trying to popularize. I'm calling it sci non-fi. It's written in a style inspired by Charlie Stross's Accelerando and others, written in the style of science fiction except it's all grounded in what's actually happening.

>> So, Alex generates on the order of 150 stories a week. I'll generate probably 20 or 30 stories a week. We get some from Salim, some from Dave. All this gets sorted into different categories. We then cut it down to the top 30 stories. I typically spend about 10 hours playing slide shuffle, working with Jan Luca and Dana, who are incredible members of our team, and then we do research on those stories to get the details and think about them. I'm probably spending a good 15 hours of my week focused on this. How about you, Dave and Salim?

>> Well, everything you just said. You know, I lean entirely on Alex's internal feed, which now you can get on X. It's a digest of the same thing, brand new as of the last week or so, so take advantage of it. But I've been reading that internally for, what, a year now, I guess, or more, which is very time-consuming, but I need to know it all. The only other thing I do is route all the really big stuff over to the venture capital team and ask, what are the business implications of this, which we need to know anyway to run our venture fund? And then I try and bring those stories back into the Moonshots feed so that we can talk about not just the technology, but what it means to investors, to business people, to people with career planning, and all that.

>> I source a few stories, but nowhere near as much as the rest of you. But I spend a chunk of time: the minute you guys release the deck, I look through it and then find it's changed again, so I have to restart. I'm always playing catch-up with the slides, and then, Peter, on the last night, God knows what you do, but you change it all again and I have to re-research it. I spend a few hours a week looking up the terms in the papers that Alex surfaces, because half of it is Greek to me. And then I'll also ask my community, my OpenExO community, so there's a hive-mind reaction to some of this, which I think is very powerful, similar to Dave asking his team.

>> Uh, just again, to let our subscribers know...

>> It's just sucking up more and more time per week, but it's such an important thing.

>> It's the most fun thing we do. Come on.

Super fun. But what no one ever warned you about, Salim, is the singularity of covering the singularity. It's a singularity of time suck.

>> It's just a black hole. It's a black hole. A Dyson swarm forming around my own head.

>> Singularity wants your attention.

>> So, for all our subscribers and listeners, we hope you guys appreciate it. We put in a huge amount of work because we care about this deeply.

>> I need to give a quick plug. I'm doing my meaning-of-life session next week. We're almost sold out. It's going to be pretty amazing. It's going to go for several hours, starting 11:00 Wednesday. Come armed with any question you have about life, and judge me by how well this framework answers that question. Boom.

>> All right, let's get to our outro music here from David Drinkall. I think it's the perfect name for a drinking game.

>> That can't be real. Oh my god, it's a bingo card, every...

>> And so this is a bingo card. And you can see "tile the Earth."

>> Have our glasses of water ready.

>> Yeah, I do. Cybernetics. Okay, let's listen.

>> Where's the humanoid robot entry?

>> Six-arm humanoid robots. Robots down at the bottom and cloud computing on the bottom left. Okay, let's take a listen to David's outro music. Thank you, David, for producing this for us.

And again, if you're listening and you are creating music videos and you want to create an outro song for us, send it over. We'd love to listen to it and perhaps select it. All right, let's take a listen.

Take a sip when Peter says go try and gentlemen two if he name drops just got back from again

drink when Alex says better benchmarks abandoning bench and finishing glass if he whispers dyson swarm at last

moonshine lingo Sorry.

The earth with comput bag chug sip when someone says we'll cure every disease.

When they mention startups or singularity drinking drops insert my usual objection

and the phrase red successively moonshot bleed training go Shopping quick.

One sip for every code red. Two for Humanity's Last Exam. Three when Alex says "solving math." Yes, that old plan. Big up when anyone says universal basic services. Pass out when Peter yells, "That's a moonshot, ladies and gentlemen." Drop.

>> All right.

>> Amazing.

>> That is awesome.

>> Yeah, it's a moonshot, ladies and gentlemen.

>> You know, this is again a tribute to the creative nature of all of our subscribers. Thank you, guys. And also to the tools out there that allow you to do things like this.

>> Guys.

>> Amazing. Have an amazing weekend.

>> Yeah.

>> Super creative. Take care, folks.

Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sent out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.
