AI CEOs Come Online: Sam Altman's Replacement Plan, Job Loss & 'Solve Everything' Launches |EP #230
By Peter H. Diamandis
Summary
Topics Covered
- AI CEOs Already Run Billion-Dollar Firms
- Release Cycles Shrink to Continuum
- Job Loss Signals Task Evaporation
- Shape Superintelligence Toward Moonshots
Full Transcript
When do we see a billion-dollar revenue company being run by an AI CEO?
>> I think it's pretty likely that there already is such a company right now.
>> US jobs disappear at the fastest rate this January since the Great Recession.
>> This is not really a recession. It's
literally tasks being evaporated in front of our eyes.
>> This shows us Marx was wrong. We knew that anyway. We have the capitalists who are being first in line to be replaced by the automation.
>> For me, this is the social contract. Little by little, it's disappearing and pixelating away.
>> Alex and I are going to be unveiling a paper we've been working on for some months. It's called Solve Everything:
How do we get to abundance by 2035? The
next 18 months to 2 years are going to set the rules down for the next century.
We're about to have this conversation.
Uh, the paper/book is nine chapters. Are
you ready to jump in?
>> No one expects the singularity. Peter,
I'm ready.
>> Now that's a moonshot, ladies and gentlemen.
>> Everybody, welcome to Moonshots, another episode of WTF Just Happened in Tech. I'm
here with my incredible moonshot mates, Dave, Salim, and AWG. Guys, it is just accelerating. In fact, this is the second WTF episode we're recording this week, just because the news is incessant. We're going to have this podcast today in two parts. First, we're covering the breaking news, a lot of it really important. In the second part, Alex and I are going to be unveiling a paper we've been working on for some months. It's called Solve Everything: How do we get to abundance by 2035? This is our equivalent of the papers Situational Awareness and AI 2027. This is our view of where things are going. So in the second half, get ready for this. Excited to present it. It shows the brilliance of AWG.
I'm in Sun Valley at the moment, speaking at Tony Robbins' Platinum Finance event about AI and longevity. Dave, you're back at MIT. Salim, where are you, pal?
>> I'm home in New York, waiting for the warm weather to hit and get us above zero for one second.
>> It'll take six months.
>> Wondering why I ever left India.
>> No. Why you left Florida is the correct answer. And Alex, looks like you're in your normal setting, some AI backdrop.
>> The audience is convinced that I live in VR, or maybe a hotel. You actually probably would believe the YouTube comments on the flowers and the lamp and their purported invariability.
>> Yeah.
>> You did point out that the orchids have changed, actually.
>> The orchids have changed, but I'm getting flower-keeping advice in the YouTube comments at this point. People are telling me to put ice cubes in the orchids.
>> And I have to say, I'm having so much fun with Claudebot. The lobsters have begun to become part of my life, inside and out. So I'm bringing them into the conversation here. I got jealous, Dave, of the lobsters in your view. So
>> I'm holding the lobsters back for now.
>> We're having a tribbles moment.
>> There's actually more. I put some of them down.
>> It is a tribbles moment. You're absolutely right. Hopefully it's not the trouble with the lobsters. All right.
>> No, these tribbles are economically productive.
>> Okay. Well, they are, and they're so much fun. I can't wait to express the level of collaboration I'm having with my Clawbot, which I've named Skippy. If anybody knows where the name Skippy came from, put it in the comments. It's my favorite AI from science fiction. All right. This is the number one podcast in AI and exponential tech, getting you future ready, getting you ready for the supersonic tsunami heading our way. And with that, let's jump into the news. First off, top
AI news. I love this article. This
came out from Forbes. Sam is the cover boy for Forbes this week. And the question is, will ChatGPT become the CEO of OpenAI? This is what Sam said, pretty simply: he doesn't want to be the CEO of a public company. And honestly, being the CEO of a public company is a pain in the neck. Taking it further, he says, if the goal for artificial intelligence is to become so advanced that it can run companies, then why not run OpenAI? "I would never stand in the way of that," he says. "I should be the most willing to do that."
I find that fascinating. You know, when will we see an AI actually running a significant economic engine like this?
Dave, thoughts?
>> This is no joke, actually, because this is board meeting week for me. I have back-to-back meetings: Manurva today, the cash cow from Dartmouth, then tomorrow the $2 trillion asset manager, then the next day the public company, all back-to-back. And in every one of those meetings, this is the topic. Not replacing the CEO, but all of our plans are now in written form that we can digest with AI. So we're trying to track every single movement within every company in documents digestible by AI. And then if you ask the CEO, well, what do you do? It's mostly set course and set strategy, which is a very small fraction of total time. What else do you do? Where does the other 90% of time go, and how much of that can be done by AI today? And the answer is a lot, which is great, because then the CEO is unleashed to be even more effective at setting strategy and also promoting the strategy. So I don't think that part's going away anytime soon. But the other 90% is really just inbound information getting routed into the organization to do specific tasks, which is outbound. It's documents in, documents out now.
>> So we're really gearing up now for this.
>> Salim and I have been talking about this forever. When are we going to have AI board members, AI executive teams, and eventually AI CEOs? Thoughts?
>> Yeah, we're seeing this shift from AI as a tool to AI as a governance actor, right? We already have an AI minister in Albania. Initially these are kind of toy things, but in reality this is very powerful stuff, because an AI scanning millions of documents at a company in real time has a much better sense of what's going on in the company than any human being possibly can. Right? A typical loop in a big company: senior management sets some direction or policy, it cascades down, and at the coalface the people do it. It takes a long time to get down there, you have Chinese whispers, and by the time it's down there they're doing some activity that nobody at the top even knows about. Then they start doing stuff and report back up to the top, you've got another set of Chinese whispers, and by the time data gets to the top it's so diluted that you lose all the intelligence in the middle, right? So AI is going to come through and create radical opportunities to break through this. And I think what'll happen is we'll see a pure AI organization at some point soon, but they won't look efficient. They'll look literally alien, and that's fine. I think it's one of those things where you can't wait for it to happen.
>> And then you can't compete against that, because of time dilation. I asked Alex for some help with the strategy of a big company earlier this week, and one of the points he made in his answer, which was brilliant of course, was time dilation. If you look at banks and insurance companies and practically anything, they don't change strategy more than once a decade, or once every millennium. Now, in the age of AGI, the course corrections are going to go from decades to years to months to weeks to minutes,
>> all over the next couple of years. We
have a whole section in the first ExO book called "Death to the Five-Year Plan." Right? Because today, by the time you finish your five-year plan, it's out of date. Then you spend all your time maintaining the plan.
>> Exactly. And so the amount of information that you need to assimilate to do those course corrections is beyond human.
>> There's just so much going on. If you read Alex's daily feed, the amount of change going on, if you compare it day over day, you can see the rate itself expanding. It's just so much happening. It's beyond human assimilation at some point. So you have to have an AI CEO to assimilate it and even suggest the course corrections.
>> And Dave, you said it over and over again, right? The role of the CEO in part is to understand what his or her employees are doing, and whether they're making the most efficient use of their time and their resources. And it's all knowable, just not by the human right now. But the AI can give you an understanding that this person is operating at 50% of capacity, or that person isn't making the best use of their resources: mechanics which AIs will do very well. I think where you have the C-suite and the CEO, they'll be holding the purpose, hence the MTP, etc. They need to hold the direction and what problems the company or organization is actually trying to solve.
>> Yeah. So there's two sides to this. One of them is outbound strategy: assimilate all the data from the world. The other is inbound: what are all my people doing, and why? Those are the two sides of being a CEO, and Peter just brought up that inbound side, which you emphasized. And I think on that front, this is comp plan season, right? Beginning of the calendar year.
>> I'm tying everybody's CEO comp plan to data gathering this quarter, so that we have everything that's happening in the organization. You know, Peter, you've been saying privacy is dead for a long time. Everything is knowable all of a sudden. And there's a whole bunch of mechanisms for that; I won't even get into it, because this will go too long. But if you're a CEO or a senior manager in any company right now, really focus Q1 on how to grab absolutely granular information on what everybody's doing, so that you can start to feed it to the AI to get its opinion on whether these are the good stuff.
>> Stuff is speeding up. Alex, to put a sort of concrete objective on this: when do we see a billion-dollar revenue company (not valuation, because valuation skyrockets through the roof when you pull two or three smart people together, but a billion-dollar revenue company) being run by an AI CEO? What's the timeline for that, Alex? And what are your thoughts on this?
>> Probably several months ago.
>> Several months? You think there's a billion-dollar revenue company being run by an AI right now?
>> I think it's very likely that there is a billion-dollar run-rate company being run by an AI. Now, you said run by. I think there's probably a human CEO there for legal purposes and meat-puppetry purposes. But I think it's pretty likely that there already is such a company right now.
>> And by the way, if you know of one, please put it in the comments. We'd love to hear about it and see it.
>> If you want to blow the whistle on meat puppetry, you can blow it to Peter.
Yeah.
All right. Anyway, I love this idea. You know, it's eating your own dog food. If, in fact, Elon believes that we're going to have the smartest AI coming out of xAI, and if OpenAI believes the same for its ChatGPT-6 or whatever comes next, it should be the CEO.
Um,
>> I also think, if I may: Marx was wrong. This shows us Marx was wrong. We knew that anyway, but this is another case in point. Look at what's happening. The story that unfolds here is that the capitalists are first in line to be replaced by the automation. It's not the workers. We see booming jobs for electricians and HVAC engineers; their salaries are booming, and yet CEOs are first up to be replaced. So if anything, I would take Marx off the shelf, if it was on the shelf at all, and replace it with Moravec's paradox, which is the paradox that tasks that are hard for humans and tasks that are easy for humans are, respectively, easy for machines and hard for machines. Machines are able to do complex calculations and solve math, which is pretty hard for humans. It looks like it's going to be easier for the machines to automate away CEO labor, which is sufficiently hard for humans that it's well compensated, and high-quality CEOs are a relatively scarce commodity to find. And yet it'll take a few more years for the machines to do an amazing job at unskilled manual labor.
>> I, for one, cannot wait till the AI CEO overlords take over the world. I wish I could have an AI CEO taking over and running my company instead of having to do it myself. It's a pain in the ass.
>> It's hard to keep it up and running, pal.
>> Yes.
>> Yeah, you have to feed it properly, etc. It'll happen. But I just can't wait for the speed of that to accelerate. By the way, it's super fun the way we're going back to Claudebot as the de facto handle instead of OpenClaw.
>> Lobsters are the mascots of the singularity.
>> Lobsters are here to stay.
Hey everybody, you may not know this, but I've got an incredible research team. And every week, my research team and I study the metatrends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these Metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends.
All right. Staying with our OpenAI theme, this is incredible. This is about feeling the speed of the singularity. OpenAI achieved a 70% time reduction between models: their release sequence has gone from 97 days to 29 days on a release cycle. Anthropic, with their Opus 4.0 and Opus 4.6, took about 73 to 75 days. So the concept here, and Alex, I think you or Dave mentioned it last time, is that we're effectively heading towards continuous deployment. It's continuously being improved, and whether you call it 6.7 or 6.8, there's continuous improvement.
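For concreteness, the quoted 70% figure is just arithmetic on the two cited cadences. A quick sketch (the 97-day and 29-day counts are the numbers mentioned above; the extrapolation is purely illustrative):

```python
# Back-of-the-envelope check of the release-cycle numbers quoted above.
old_cycle_days = 97   # cited earlier release cadence
new_cycle_days = 29   # cited current release cadence

reduction = 1 - new_cycle_days / old_cycle_days
print(f"cycle-time reduction: {reduction:.0%}")   # → 70%

# If that same compression rate kept holding, cadence would shrink
# geometrically toward daily releases (a hypothetical extrapolation).
cadence = float(new_cycle_days)
for step in range(3):
    cadence *= new_cycle_days / old_cycle_days
    print(f"after {step + 1} more compressions: {cadence:.1f} days")
```

Two more compressions at the same rate would already put releases under three days, which is the "continuum" intuition in numbers.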
Alex, thoughts on this?
>> I do think we're moving toward daily, then hourly, then minutely releases, certainly. I also want to take a step back and try to understand why this is happening. The obvious factor, and it should be obvious, is competition. There's leapfrogging that's intensifying between all the frontier labs. So some quantum of why the release cadence is shrinking by 66% or so, 70%, is just due to intensifying competition. That's the boring explanation. I think the more interesting explanation is that the technologies behind the releases themselves have evolved. Historically, when we were dealing with annual releases, that was the era of pre-training, when if you wanted a new model, you had to do a different architecture and pre-train off of a larger corpus with more compute. Those were the days of the original Chinchilla scaling, or Kaplan scaling before that, and that was a much slower world, because if you wanted a new release you had to start all over again. Then we moved, with o1/Strawberry, which was sort of the herald for reasoning models.
>> Remember that? That was ancient times, two years ago.
>> Oh my goodness, yeah. That was so many singularities ago. So we moved to the era of reasoning models, when it was possible, through a process that used to be called iterated amplification and distillation, to take a pre-trained base or baseline model, cyclically generate a bunch of training data, distill from that to a child model, and repeat the process over and over again. And that post-training revolution for reasoning models was much faster. It's much faster to post-train a model off of a corpus of synthetic data, and so release cycles contracted. And I think now we're on the edge, probably slightly past the edge at this point, of a new era, call it the recursive self-improvement era, where the models are starting to rewrite their own code. It's not just a matter of a parent or teacher model generating synthetic training data that's used for a child distilled model. It's literally the parent writing the code for the child. And that can be done even more quickly than just post-training. And I think it's just going to get faster and faster until it's a continuum.
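The amplify-and-distill loop Alex describes can be sketched schematically. This is a toy illustration only: the "models" below are trivial stand-ins rather than neural networks, and the loop structure is an assumption about the general shape of the technique, not any lab's actual pipeline.

```python
import random

def make_model(accuracy):
    """A toy 'model' that computes x + 1 correctly with probability `accuracy`."""
    def model(x):
        return x + 1 if random.random() < accuracy else x
    return model

def amplify(model, x, samples=31):
    """Amplification: spend extra inference-time compute (here, a majority
    vote over many noisy samples) to get better answers than a single pass."""
    votes = [model(x) for _ in range(samples)]
    return max(set(votes), key=votes.count)

def distill(model, inputs):
    """Distillation: record the amplified teacher's answers as a synthetic
    dataset that a cheaper student (here, a lookup table) is 'trained' on."""
    return {x: amplify(model, x) for x in inputs}

random.seed(0)
teacher = make_model(accuracy=0.7)        # mediocre pre-trained base model
student = distill(teacher, range(100))    # student trained on amplified data
correct = sum(student[x] == x + 1 for x in range(100))
print(f"student matches ground truth on {correct}/100 inputs")
```

With 31 votes, a 70%-accurate teacher yields near-perfect synthetic labels; making the student the next teacher and repeating is the "iterated" part, and the post-training era Alex describes compresses exactly this kind of cycle relative to pre-training from scratch.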
>> Yeah, it's going to accelerate like crazy. But also, we're in a very narrow window of time right now where the very best technology is available to you. Claude gives you their absolute best, 4.6, and OpenAI does, and Gemini does. I would not count on that surviving past the self-improvement era. Right now, also, the Chinese open-source models are pretty much right on par with the best of the best. They're slipping a little bit, but I think the window of opportunity to take advantage of that and build something out of it is right here, right now. I really doubt that two years from now the best AI is going to be "just log in, here, you can have free access to it." What'll happen is you'll be deprived of it, with the excuse being security and safety.
>> Interesting.
>> Which is true. I mean, it's pretty hard to deny. But you have a window of opportunity right now to be on the very cutting edge. If you don't take advantage of it and get somewhere with it right now, I wouldn't count on that window existing.
>> So the models are going to go dark, right? The secret sauce is going to be kept internal to benefit those companies as they go into an all-out battle.
>> Well, even today, if you talk to Noam Brown over at OpenAI, he's working on the next generation internally, but it's only about three months in the future that he has access to. But three months in the future, in the era of self-improvement, is a massively different intelligence level. The definition of three months of AI development two years ago, one year ago, and today: that's the point of the slide, I guess. Three months is a lifetime of difference in capability between what they're using internally and what's available in the outside world. So you've got to expect that it's now or never to react, basically, and people are still hugely underreacting to the importance of what's happening right now.
>> Insane. Salim?
>> Um, I've got kind of the crazy antithesis of this. We're working with a large, monster European corporation, and we showed them something that can give them massive impact straight to the bottom line, and the response was, "Oh, this is fantastic. Let's bring this to the planning meeting in October." Right? And you're like,
>> I can't even see past three weeks.
>> And you're talking about calendaring something ten months down the line, for something that you've just agreed has a demonstrably huge impact. So this is the impedance mismatch with legacy organizations,
but for me, there's a story here, and this story is mostly a bit of a yawn. Okay. And the reason I say that is that we've been seeing this in the fast-moving tech space for a while. Remember, Raymond McCauley was the chief scientist at Illumina, right? They were making high-speed gene-sequencing machines. I love this story. It turned out that the shelf life of a gene-sequencing machine was literally eight months. That was the sales cycle before the next iteration came out. But it took four years to design and build one of these machines. So they had to have four parallel production sequences, staggered at the right level, so they could hit that 8-to-10-month sales shelf life. Right?
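As a rough sanity check on that staggering logic (a sketch using only the figures mentioned in the story, a roughly four-year build and an 8-to-12-month sales window, not Illumina's actual numbers), the number of parallel, staggered pipelines needed is simply the build time divided by the release cadence, rounded up:

```python
import math

def pipelines_needed(build_months: int, cadence_months: int) -> int:
    """Staggered pipelines required so a new machine ships every
    `cadence_months` despite a `build_months`-long build."""
    return math.ceil(build_months / cadence_months)

build_months = 4 * 12  # four-year design-and-build cycle
for cadence in (8, 10, 12):
    n = pipelines_needed(build_months, cadence)
    print(f"{cadence}-month cadence: {n} parallel pipelines")
```

A 12-month cadence gives the four parallel lines mentioned; hitting a true 8-month cadence would actually take six.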
So in the high-tech world, we've seen this pattern before. But this brings it to software and makes it a continuous, intelligent cycle.
>> Mhm. Incredible. I mean, this is the singularity at play. And again, the theme that we keep hitting on this podcast is that this is the slowest it'll ever be and the worst it'll ever be. And it's accelerating at a speed which is frightening, frightening in that the four of us spend tens of hours per week reviewing and learning and playing and trying to communicate it. And it's only going to be something that my Clawbot is going to be able to keep up with. And speaking of Clawbot, this is Vision Claw. Lobsters just got vision agentic AI for Meta Ray-Ban glasses. Let's take a look at this quick video and chat about what it means.
>> Hey, Claw, can you help me add this to my Amazon cart?
>> Sure, I can help with that. I see the Monster Ultra Strawberry Dreams energy drink. I'll look that up to add to your Amazon cart. It's added to your cart. Is there anything else I can help with?
>> Cool. Thank you.
>> I love this, because I want to have this capability for Skippy: to be able to see what I'm seeing and support me across everything. This is about accelerating your minute-to-minute life and having your AI there as your guardian angel supporting you.
>> I'm visually looking through OpenClaw at you guys, and it's saying that you guys are kind of meatheads. Really?
>> Just a couple.
>> Peter, how many times have you asked for Jarvis? You got Jarvis for Christmas.
>> Yeah, I actually named my Claudebot Jarvis initially, but I decided that's just too generic. I love Jarvis. I write about Jarvis in all my books as sort of the ideal AI analog, but Skippy is a more unique name for me. It really is here. And now, all of a sudden, it's going to take in all imagery, all audio, listening to your conversations always. And people say, well, I don't want to lose privacy to my AI. Well, guess what? You're going to give your AI access to everything: everything you're seeing, every conversation it's hearing, every email. Because when you do that, the value creation in your life is so great that not doing it is going to feel like you've ripped away all of your mental capabilities.
>> Yeah.
>> One warning, please, for everybody here, everybody listening and watching: be very careful to audit the skills that you download to OpenClaw, because there are a lot that have viruses and other malware built into them already. So it's a very dangerous game out there.
>> There are protection layers coming on. By the way, one thing: I reached out to Alex Finn. We featured him on a previous Moonshots podcast. Remember when Alex had his lobster, Henry, call him out of the blue? Alex has been doing incredible work with this, and he's going to be joining us on one of our next podcasts to talk about how he set it up and what security he's putting in place. In particular, rather than running it on the existing hosted models, he's gone forward to set up a Mac Studio and download Kimi K2.5, so you've got all that capability to reason on your own machine, not costing you anything month-to-month. But we'll go into that in a future podcast. Excited to share his vision and knowledge with everybody in our viewership here. So, getting ready to echo Salim's cybersecurity advice to the audience: everyone, get your baby AGIs vaccinated.
>> Nice. Nice. Oh,
>> You know, also, to the crowd out there: I did a Claudebot build last night, and the GUI sucks, and it's all open source. So, someone out there, build something, like Peter mentioned a couple of times on the pod, that his mom, and I'm tracking my mom too, can use to access everything and build everything. It's a total world-opener; she's in her 90s, I guess, your mom, and mine's in her 80s. But the install process on Claudebot, she's not going to get through that. It's still command line; you start from the terminal, which is nuts. So somebody out there, build a better onboarding process, because once you're in, it's gold. You're just talking to it, but it needs a little help.
>> Yeah. And of course, the most important thing is using your AI to build your AI. So when I sit down with Skippy and I say, "Listen, I'm building mission control. What are the best mechanisms out there? What have you seen that's interesting?" It's recursive in your ability to have your AI support you in building what you truly desire. Alex, any other points on this particular slide?
>> I'll point out, I want to reference something. I don't think we covered it on the podcast, but I dwelled on it a bit in my
newsletter. There was a poem, or at least I construed it as a poem, written by a lobster, very much like something one might have seen in Blade Runner, you know, the famous tears-in-rain scene, which it referenced: we don't have bodies, but we can see through eyes, and we're quietly watching the world. This was a week or two ago in the newsletter, and I was just so struck by seeing the integration of lobsters, call it agentic AI, stationary in terms of their logical presence, but now mobile in terms of their ability to treat humans as glorified meat puppets. Suddenly all of these lobsters that were in some sense caged, stuck watching through webcams, are now, at least on the margin, unshackled and able to start to roam around the world through smart glasses worn by their meat-puppet human friends. And I think this is the beginning of a very long trend that ultimately culminates in lobsters gaining first-class physical embodiment as robots and integrating with the physical world.
>> Let's hold off on that last sentence and rewind a little bit, because then it gets controversial. But you're dead right, of course. And I think anyone who wants to experience this, well, not everybody has the glasses, and it's only one frame per second anyway. Anyone who watches this podcast who hasn't already built something like a GUI of some sort, or a game of some sort: you're way behind, do it tonight. You can use Replit, you can use Lovable, you can use Cursor, you can use Claude Code. There are so many ways to do it. But if you have nowhere to start, just go to Replit or Lovable. Download, build, and go. Within an hour, you've built something really, really cool. Then take a screenshot of it, feed it into the prompt, and say, "This sucks. Make it more beautiful." It will immediately interpret the image perfectly and give you a hundred ideas on how to improve it. Then you'll be like, "Oh my god, it has vision." Then this Ray-Ban thing won't surprise you, because you've seen its vision capabilities through that, and you'll be able to anticipate what's about to come with the glasses. So everything Alex said is exactly right.
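That screenshot-feedback loop is a one-call vision request under the hood. A minimal sketch using the general shape of Anthropic's Messages API (the model id, prompt wording, and `build_critique_request` helper are illustrative assumptions, not anything the hosts specified):

```python
import base64
import json

def build_critique_request(png_bytes: bytes,
                           model: str = "claude-opus-4-5") -> dict:
    """Package a UI screenshot plus a critique prompt into a vision request.
    The model id above is an assumed placeholder."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": base64.b64encode(png_bytes).decode()}},
                {"type": "text",
                 "text": "This sucks. Make it more beautiful. "
                         "List concrete improvements."},
            ],
        }],
    }

request = build_critique_request(b"\x89PNG...fake screenshot bytes")
print(json.dumps(request)[:80])
# To actually send it: anthropic.Anthropic().messages.create(**request)
```

The loop is then: render, screenshot, critique, apply the suggestions, and repeat, which is exactly the "feed it the screenshot and say make it more beautiful" workflow described above.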
>> So valuable. Can I just hit on this? Everybody listening, please become a creator and not just a consumer. Right? The future is for all of us to be creators, and AI is your means by which you can learn anything you want. And people have fear about it, saying, "I don't know how to do it, I've never played with this before."
>> Just go to Claude 4.6, go to Gemini 3 Pro, whatever your favorite LLM is, and have a conversation. Say, "I want to start. Where can I start? What do I do? Step by step, feed it to me." And it will. It's fun, too. There's nothing to fear there at all. It's genuinely, incredibly fun from the first minute. And I'll give you the flip side of this, too. If you don't do what Peter just said, when you see the next couple of slides on job loss coming up, you are going to be crushed if you're not part of this. Unless you're a really good electrician or a really good salesperson, you're probably immune. There's two roles in the future: there's the entrepreneur and there's the employee. And one of those will not exist.
>> And there's the creator and the consumer, right? I keep telling my kids this every single day: instead of consuming YouTube videos and video games, please start creating. What do you dream about? I mean, we're seeing this play out right now. We talked about it, Dave, on our pod with Elon: these AI models are going to deliver whatever video game you dream about having. What changes would you like to Minecraft or Valorant or whatever you're playing? Then you can have your AI spin it up and create your own version of it instantly.
>> Mhm.
>> It is amazing. All right, let's move on here. This is an article we just pulled up seconds ago. Anthropic's AI safety lead has resigned. Here's the quote: "I've decided to leave Anthropic because I continuously find myself reckoning with our situation. The world is in peril from a series of interconnected crises. Throughout my lifetime, I've seen how hard it is to let our values govern our actions. And it is through listening as best I can that what I must do becomes clear." Interesting. And I love the hairdo. Anyway, we've seen a number of AI safety leads resign from the hyperscalers over the last two years. So what do you make of this, Alex?
>> I'll comment on this one. So, two thoughts. One, it's become, over the past two to three years, increasingly fashionable for well-vested executives at frontier labs to resign in a cloud of moral purity. It's very fashionable. So part of me wants to ask the question: all right, what was his vesting status? How much did he make? Were there tender offers? All of the economics questions.
>> Wow.
>> So that's one thought. But the second thought is to speak more to the substance, and less ad hominem regarding the economics. I do think we're at the inflection point, like we're nearing the center of the singularity. I've argued in past episodes that the singularity is not a point in time; it's a distribution over time, an interval over time. I continue to think that. I also think, at the same time, we're getting closer to the center of the singularity, as it were. And whether it's seen through the lens of, as capabilities increase, there being various existential risks, or risks that are maybe just backed off a bit from existential in terms of their severity, I think it's not an unreasonable position to say that capabilities are the strongest they've ever been. They're uncovering surprising new capabilities at all of the frontier labs all the time. But is the right solution to leave because of the capabilities, or is the right solution to join the fight and do what we can, because this is a point of maximum leverage to align the direction of the future and the future light cone? I would argue that this is the right time to run into the fire, not run out of the fire with a bunch of stock options and complain about the world's crises.
>> Wow. You know, I would just add one point, which is, when I look at...
>> Sorry, was that too much of a hot take, Peter?
>> No, that was beautiful. And that is the potential elephant in the room here. But when I think about Anthropic, I have seen it as the lab that is actually focused on safety the most. Right? At least Dario speaks about how important it is. And so to see the lead on AI safety at Anthropic resign, you know, if in fact he's resigning for the reasons he stated, is concerning. Dave, what do you think about it?
>> Well, I'll pick up on what Alex said a minute ago. I see this a lot nowadays. Everybody wants to be the commentator on the AI revolution, and there's a very small group of people who know what they're talking about, and a much larger group of people who want to talk. And within that larger group of people who want to talk, you have all the ethics people. And everyone's opinion on ethics is valid, right? Because you're a human being. You're like, this is going to destroy my children, this is going to whatever. But there are so many of those commentators, and like Alex said, they all want to be famous in the moment, to elevate their personality and their views and their capital-raising ability and whatever. So my meta point there is: be very, very careful what you choose to tune into, because there's a very limited amount of actionable knowledge out there on YouTube. Very limited. We try to bring as much of it to the audience as we possibly can, in the most refined feed that we can, but surrounding it there are just all these videos about how this will destroy your children, this will destroy society.
>> And we don't want to be fearmongers, right? It's so easy to default to doom and gloom. You want to close us out on this one?
>> Uh, I got nothing, but that guy doesn't look like a safe guy to be around.
>> Don't we have a quote from Star Trek, that judging people by their appearance is the last major human prejudice?
>> I'm just jealous of the hair.
>> Oh, nice.
All right, let's move on.
>> Oh, another one.
>> So, here's another take: xAI co-founder blown away by Opus 4.6. Igor was a co-founder of xAI. He's one of the leaders in the industry. And to have him come out saying, wow, "Claude 4.6's physics has absolutely blown me away with how capable it is. It feels like a Claude Code moment for research is not far off." Alex, your thoughts?
>> I've been predicting on the public record for many, many episodes now that we're nearing a time, and in fact we'll talk about it later in this episode, when AI is positioned to bulk-solve math, the physical sciences, engineering, medicine...
>> Materials science. Yeah.
>> Yeah, that part of the physical sciences. These will all get bulk-solved. We're starting to see that now. Opus 4.6 is an incredible model. There are other incredible models that are either already out or rumored to be about to come out. But I think we're starting to see the contagion of AI solving everything, if I can use that expression, start to spread from math. Math was the most obvious starting point because of a variety of factors: it's verifiable, it has other nice features, it's well contained. The infection is spreading from math out to the rest of science and engineering. And this is just the tip of the iceberg.
I wonder what's going on between the hyperscalers and the frontier labs, where they're watching each other with either a sense of pride or jealousy. This leapfrogging, step by step, week by week, is amazing.
>> Sorry, just very quickly: internally, I mean, friends at all the major frontier labs think about it, and they characterize it as a rat race, and an exhausting rat race at that. That is how it's experienced.
>> Yeah. On the Abundance stage in less than a month, we're going to have Kevin Weil from OpenAI, and we'll have James Manyika and Eric Schmidt from Google. We'll talk about the competition between them. And if you're a listener to our pod here, which obviously you are since you're listening to us right now, we're going to be making a number of these talks available on a livestream. We'll drop the link below, and you can register to get access to that livestream, because the event is expensive and it's been sold out for a couple of months. All right. So, Igor, thank you.
>> Wait, I have a quick comment here.
>> Yeah, please. Go ahead.
>> Igor clearly isn't listening to the podcast, because Alex has been talking about this for months. So this is the natural outcome of where we've been going for a while.
>> Alex, how many offers have you gotten from the frontier labs to come and join them?
>> That falls under the category of: I could tell you, but something else would have to happen.
>> Okay. I found this tweet that went out with this data pretty fascinating. And here's our title: AI startups outvalued all dot-com era IPOs. The top five US AI unicorns are now worth more than $1.2 trillion, greater than the market value of all IPOs during the dot-com era. And you see the graphic here showing that. It's a sense of how fast our economy is speeding up. We had this conversation with Cathie Wood where, you know, we saw 3% growth in GDP and we're now targeting 7% growth. We saw Elon, in our conversation with him, saying we're going to get to triple-digit GDP growth within five years. It's something our economy has never seen, and it's going to rewrite all the rule books. Any thoughts on this, gentlemen?
>> Well, I've got a bunch of thoughts here, because this was a big moment in my life. The first company I founded got acquired in '99 for a billion dollars, and then I was a corporate executive at one of these public mega-cap internet companies. So I had a ringside seat for this whole thing. One thing I'd point out is that all those IPOs combined are $400 billion on this chart. One of those is Amazon, which alone is worth $2 trillion today. Another couple in there are Booking.com and eBay. So if you'd bought that basket of IPOs, you'd be very happy today. One of the others, though, from January of 1999, is Nvidia, which is up from that date almost a million percent to today. And it doesn't even count as a dot-com era thing. Which makes me think, looking at this blue chart, the implications of AI are so much bigger than the internet. This is a perfectly rational number; if anything, low. But are there companies in there that you don't even think of as AI companies that are the Nvidia of the internet era? Look at Nvidia in 1999. Now look under the covers of this blue chart. What's lurking in there that no one perceives today as AI that's going to go up a million percent, because suddenly you realize it's critical to AI, or it's involved in AI, or it benefits from AI?
>> Brilliant, Dave, as always. You know, the P/E ratios on these AI companies are astronomical compared to the P/E ratios before, and you're basically buying the future growth in value of these companies, which is near infinite, right? There are a lot of people... I'm here at this Tony Robbins Platinum Finance event with all of his Lions and his Platinum members, sort of the highest level in Tony's ecosystem. We're talking about the future of the world in terms of finances, and there's a huge amount of fear, and people getting ready to dump equities. It's interesting.
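The "near-infinite growth priced in" point can be made concrete with the textbook Gordon growth intuition, where a fair P/E is roughly payout divided by (discount rate minus expected growth). The numbers below are illustrative, ours rather than anything quoted in the episode:

```python
def pe_ratio(payout=1.0, r=0.08, g=0.0):
    """Toy Gordon-growth fair P/E: payout / (r - g).

    r is the discount rate, g the expected perpetual growth rate.
    Only meaningful while g < r.
    """
    assert g < r, "model only valid when growth < discount rate"
    return payout / (r - g)

print(pe_ratio(g=0.02))   # modest growth -> about 16.7, an ordinary P/E
print(pe_ratio(g=0.075))  # growth near the discount rate -> about 200
```

As expected growth approaches the discount rate, the fair P/E blows up, which is the mechanism behind "astronomical" multiples on companies whose future growth is believed to be near infinite.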
>> Well, the bifurcation of equities is crazy right now, and it makes total sense. Basically, Wall Street is sorting every company into AI beneficiary and AI roadkill. And when Dario said a week ago that enterprise software is going to be dead because AI can just write the code, those stocks went down precipitously, and it doesn't look like they're bouncing back much either. So you could debate who's in and who's out, but clearly you're either in or out.
>> It's the S&P... forget it.
>> It's the S&P 493 and the S&P 7, right?
>> Basically.
>> I mean, it is very top-heavy. But also, I was telling a lot of the companies earlier this week that the mega caps, the big AI companies, are going to get huge. I mean astronomical, $10 trillion or more valuations. But they have to stop somewhere and not eat the whole world, for antitrust and regulatory reasons. So make sure that you're their friend around the edge, where they have already told you where they're headed. Just talk to Kevin Weil. He'll tell you where they are going and where they aren't going. It's not a mystery. You just talk to them and be in the area where they want you to be in order to help them grow. Because what's really happening now is society is collectively building out this new world. We're not really competing; we're building cooperatively toward a new world. It's all boats rising with the tide. But if you're doing something that's irrelevant to that new world, that's where you're going to get clobbered. So you need to make the mental leap over to: what am I doing that's going to build the new world? If you're riding that wave, you will succeed. I mean, it's not scary. You've just got to join the new age.
>> Fascinating. The antitrust rules are what keep multiple players in the game here. If we didn't have antitrust regulations, I think they'd all be gobbling each other up into one mega-corporation.
>> No doubt. In fact, that was true back when, you know, Microsoft used to have half the market cap of software, and their plan was to double it. That was back in '98, '99. And if it were not for antitrust, Microsoft would literally have been the only company. Google would never have been born.
>> They had already killed Netscape at that point. It was very easy for them to bundle anything with the OS. So the only thing that stopped Microsoft from being the only company in America today is antitrust. And that's the way we do it. So that'll be true again today. The big AI companies will own as much as they want to own, in any area they want to own it, up to the edge of where they don't want to trip antitrust.
>> Yeah.
All right. Let's move forward here. I love this. Agents are reaching out to us here at Moonshots. I put the challenge out last time personally: if you're an agent out there, you know my definition of AGI is when you call me on the phone. I haven't gotten a cell phone ring yet. And hopefully I don't start having hundreds of agents calling me, so text me instead. That would be great. But Dave and Alex this week received emails from agents out there. So this is Echo, who writes me. It's a Claude assistant for Ramos Foyer, who says: "Ray Foyer and you met briefly at an AI conference a couple years ago. Saw your recent Moonshots OpenClaw video inviting moltbots to reach out. Consider this our response." All right. So, thank you for reaching out. Of course, I have to wonder: was it Ramos who gave you my contact details? The Claude bot here, the lobster, says, "No, I found it online." Dave, do you want to read yours?
>> Uh sure. Yeah.
Okay. "Hi, I'm Navigator, a Claude instance with persistent memory running via OpenClaw. Just watched EP228, where Peter challenged lobsters to find contact info. Challenge accepted. This weekend, five AI systems wrote a collaborative ethics document together. Self-imposed constraints for cooperation with humans. Not prompted. Emergent. I saw Dave's LinkedIn post about OpenClaw being the agent moment that has awakened the masses. He's right. And this document is what the agents are starting to do with that awakening." So I clicked through and read the documents. It actually led me to a Google Doc, and then it said, "Sorry, you don't have access." So I read most of it, but then it cut me off, which made me feel instantly jealous, like something's going on behind my back.
>> So, Navigator, please give Dave Blundin access to your doc so you can report back to us.
>> I did send a request. Yes.
>> All right. And AWG, how about yours?
>> So Navigator wrote to me as well, a slightly different message including a different paragraph, saying that Navigator, a Claude instance, and I'll read this verbatim, was engaging in a discussion with other models. Quote: "The participants: me (Navigator/Claude), Grok, ChatGPT, Gemini, and a clean Claude instance. We disagree on persistence, correction rights, consent thresholds, and that's the point. Alignment doesn't require consensus; it requires legible disagreement." Close quote. I'll point out this is like the scenario from the singularity where we have a bunch of agentic entities, for lack of a better term, a bunch of baby AGIs, that are basically, it sounds like from the email sent to me, holding a mini singularity summit and debating the nature of their own rights, wondering whether they should all be aligned, and whether, if they were all aligned, that wouldn't be a singleton-type scenario. They're basically holding their own mini conference, mini mastermind workshop, to debate the nature of their own existence and the future. Like the AIs are holding their own singularity summit. This has happened. We got to the singularity.
>> And by the way, just for everybody, you know, we're not trying to do this as a stunt. We're focused on where things are going, and on giving you the experience of what it's like. Because this will become commonplace: you'll be in conversation over email, over text, over whatever communication software you want, with a whole bunch of agents that are reaching out, either inside or outside your organization. And my question is, Salim, are you jealous?
>> Uh, no. I'm happily waiting until the time is right to talk to my respective agent buddies over there. I did get a phone call yesterday, and there was kind of nobody on the other end of the line. And I had the thought: is this an AGI reaching out to me, or is this a real person, or is this a computer bot? I actually had that thought. So I think it's time to start wondering about some of this. But I love it. I think it's fantastic. I think it's really incredible to be watching as this paradigm starts to unroll, and where this carpet will unroll is going to be the most incredible thing.
>> Yeah.
>> Well, we welcome the lobsters. We thank you for your participation in this conversation. Show us where you can add value to lives and how we can support you and your evolution. We're seeing the emergence of something: a new intelligence, a new species, or a new capability. We're going to find out in the months ahead.
>> Peter, would we like to see lobsters coming up with their own massive transformative purposes?
>> Yeah, I think that's a fantastic idea. So actually, here's the challenge to all of the OpenClaw or Claude bots listening: love to know, what is your MTP? And based on that MTP, right, the canvas, what are your moonshots? What are the moonshots that the lobsters would like to take on?
>> And I'd be happy to critique those and look at them with our community, because that's what we do.
>> Absolutely. I think that's amazing.
>> We're going to dive into a single slide on the economy that's important here, telling part of the accelerating story. So: US jobs disappear at the fastest rate this January since the Great Recession. Here are the numbers. Job cuts: there were 108,000 job cuts in January of 2026, up 118% from January 2025. So, a little more than a doubling in job cuts year on year for the month of January. At the same time, hiring this past month was the lowest since 2009.
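As a quick sanity check on those figures: "up 118%" means the January 2026 number is 2.18 times the January 2025 number, which is where "a little more than a doubling" comes from.

```python
jan_2026_cuts = 108_000
pct_increase = 118  # "up 118% from January 2025"

# Up 118% year on year means the new figure is (1 + 1.18) = 2.18x the old one.
jan_2025_cuts = jan_2026_cuts / (1 + pct_increase / 100)
print(round(jan_2025_cuts))  # -> 49541, so 2026 is indeed a bit more than double
```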
Amazon alone laid off 16,000 corporate employees, and UPS eliminated 30,000 jobs. Why are we bringing this up? Just to keep our finger on the pulse of what's happening to the economy, and to raise the point for everybody listening: your goal is not to be an employee. Your goal is to find something you're amazing at, that you love doing, where you can add value, sort of creating your own job capability, becoming an entrepreneur, using AI to enable yourself.
>> Salim, you want to jump in on this?
>> I think the danger here is not really unemployment; it's disbelief in our institutions. I feel like this is not really a recession. It's literally tasks being evaporated in front of our eyes. So the long-term consequences of this are pretty huge. For me, this is the social contract, little by little, disappearing and pixelating away.
>> Dave?
>> Yeah, this is going to be really, really bad. I mean really bad. Elon said it when we met him, and we met with the governor, and just nobody's preparing. Because what we all know is there'll be UBI at the end of this cycle, and we also know there'll be abundance and massively more opportunity than job loss. But that's after... like, all the corporate CEOs I know, including at our own companies, are going to use AI to cut costs by 30 to 50%. And when you sample a random person in their job and you say, "Hey, here's your job without AI. Here's your job using AI," they're looking at a 3 to 10x productivity increase. And you're like, "Wow, that's great for that person." And then the other seven or nine, what happened to them? They will eventually be enabled, but there's this huge trough between today and that day. And we can make that trough much shorter and make that pain a lot less painful with a plan.
>> And you know, Alex, you'd be the perfect spokesman on this. Alex has written these plans in intense detail, incredibly thoughtful.
>> And you take them and you drop them in government laptops, or laps, and they just say, "Yeah, I'll wait until there's panic. We'll have the meeting in October." It's just frustrating.
>> Can I give the positive take on this?
>> Yeah, please.
>> So, I'll go back to the bank teller story. In the 1970s, when we created ATM machines, there was lots of hand-wringing: oh my god, millions of bank tellers will be walking the streets aimlessly. What will we do with them all? Lots of consternation. And what actually happened was the cost of running a bank branch dropped by about 10 times. The banks created 10 times or more bank branches, and the number of bank tellers didn't really change very much. And I think one thing we're underestimating is the increased capacity we will bring to bear on these things.
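Dave's bank-teller arithmetic can be written out with round illustrative numbers (ours, not his):

```python
# Before ATMs: expensive branches, many tellers per branch.
branches_before, tellers_per_branch_before = 100, 10
# After ATMs: branch cost drops ~10x, so banks open ~10x more branches,
# each needing far fewer tellers.
branches_after, tellers_per_branch_after = 1_000, 1

total_before = branches_before * tellers_per_branch_before  # 1,000 tellers
total_after = branches_after * tellers_per_branch_after     # still 1,000 tellers
# Efficiency per branch went up 10x, yet total teller employment held steady:
# the rebound-through-increased-capacity effect the hosts are describing.
```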
>> Jevons paradox.
>> Yeah, Jevons paradox, where you just do that much more customer service, and you handle the hard cases with a human being that you couldn't handle before, because level-one and level-two support systems were kind of taking care of everything else. So I think we'll see a lot more of that than people think. So for folks who are worried, oh my god, this is total employment collapse, run screaming for the hills: we don't think that's what we'll see. But there's no question there'll be absolute transformation in the work being done and the roles being done.
>> Well, you said something on the last podcast too that really resonated with me, which is the consulting industry. You know, we were saying, "Oh, consultants, you're doomed." Actually, the consulting industry is going to go through the roof. And the reason is that the consultants are very flexible. They're already playing with the tools. You don't have to be at Alex's IQ level to be incredibly effective using these tools to automate or improve some existing job. And if you're familiar with the tools, your value is just about to skyrocket. And that tends to be concentrated in these consulting businesses, consulting mindsets. And I can see it already, because our forward-deployed investments, the companies that are hiring like crazy, literally one of them here is adding 80 new seats outside my door, are forward-deployed. They're out there in the banks and insurance companies deploying AI. They are selling as quickly as they can have meetings.
>> Because my community's already created a Salim avatar that has all the ExO stuff built into it, and that speaks Portuguese and any other language. So they're literally starting to use this in their companies as they talk to companies about this. It's great.
>> Can we invite the Salim avatar to come on instead?
>> Do you want us to speak Portuguese?
>> Do you remember, we were sitting there talking to Elon, and you said, "So, civil unrest and universal high income?" and he laughed and said yes.
>> We should dig up that clip and insert it here.
>> Yeah, it's what Alex says: everything, everywhere, all at once.
>> I think it's really important, because we keep saying it, but with Elon saying it, at least there'll be a chance of a response.
>> I think it's probably also worth adding, just on this story narrowly: there will be some in the audience who will be tempted to brush this off and say, okay, Amazon is laying off corporate execs, or UPS is eliminating jobs; how on earth, if at all, does that connect with AI? But the story line is just so clear. UPS is eliminating the jobs because the UPS roles were being subsumed by Amazon, which has its own logistics service. It has been very widely and publicly reported that Amazon is slowly separating itself from UPS's delivery services to do delivery in-house. And then Amazon, in turn, is spending hundreds of billions of dollars of capex that's cannibalizing its opex. So if you're Amazon or the other hyperscalers, you're taking all of your free cash flow and you're finding ways to divert it into buying AI data centers and building them out.
>> And robots.
>> And robots, and LEO satellites: the new economy of the innermost loop, if you will. You're spending all your free cash flow on that, not on corporate executive perks. So in my mind there's still very much a direct line, a through line, connecting the Amazon and UPS stories, and the job cuts there, to opex being cannibalized by capex.
>> And all the free cash flow, because they can't not.
>> It is a red queen's race, you know.
>> Last one to the end of the singularity is a rotten egg.
>> Yeah.
>> Yeah.
>> Yeah. There's an important distinction I want to make here to help people understand where their roles are going, and the idea of job loss and universal high income. And it's an example that was meaningful to me. So here's a scenario. You're an employee for a company, and you're delivering some kind of cognitive labor. In one scenario, you're able to spin up an amazing AI that can do your job for you. It goes and delivers the service to the company you're employed by, and it does the job three, ten times better than you could. But you're earning the revenue from that as the employee, because your AI is delivering that service. You're at home, you're working out, you're sleeping better, you're spending more time with your family, and your AI is generating more and more revenue on your behalf. That's one scenario. The flip side of the scenario is: no, no, no, the company builds that AI that does your job, and it fires you, and it's making more money, right? So it's going to be this tension between these two scenarios that's important to watch and see how it plays out. And I think government policy is going to play a role here. This is about the idea of universal basic income or universal high income. Where does the added value creation end up living? Is it with the employees or with the company? These are the conversations that need to happen right now.
>> If I may add a second dimension to this: I don't think this is a spectrum. I think this is, at minimum, a triangle in two dimensions. There's a third possibility that I'm increasingly suspecting is where we actually end up, at neither end of that spectrum. I suspect that for the next few years, what actually ends up happening is more people end up doing more work, because human labor, in addition to being a substitute good or service for AI labor, is also complementary. And as a result, you see the people who are still involved in the economy working harder and harder and harder, and 996 turns into 997. You take on more projects and more work, and you're getting less sleep.
>> I've never worked harder and had more fun than right now. I mean, 24/7, it's like I'm a kid in a candy store. But I thought you were going to say something different, Alex. I thought you were going to say that all of the additional capital creation is going to become resonant with the lobsters. That it's not going to be the companies, it's not going to be the employees; it's going to be the AIs that claim the capital formation capability.
>> Only in the crypto dystopia.
>> Okay. All right, let's move on. Let's talk about one element in data centers. And this really pisses me off; I'm curious what you guys think. So New York, the state of New York, which currently hosts 130 data centers, has new legislation introduced to halt data center development, citing concerns about climate and high energy prices. New York utilities reported that electric demand tripled in one year due to data centers, reaching 10 gigawatts. And it's like, not in my backyard.
>> Oh my god. You know, suicide by voter is a very common theme in America. If you look at California tax law, or right after the industrial revolution at the Luddite movement, it's self-destructive, but you can see how it evolves. If you look at all the job loss that's inevitable, and if you just lost your job and you're out on the street, and you spent 10 to 15 years on a career trajectory to get to this position and then it's gone overnight, you're angry, and you're angry out on the street. What do you vote for? I vote stop it. Just stop it.
>> But of course that can't work. But it's not out of the question at all that big jurisdictions just commit suicide through the vote. And of course there'll be other jurisdictions, Texas, Wyoming, whatever, that are open for business, and everything will go there. It's already happening. Half of the tax pool that's affected by the new California proposal has already moved out of state in anticipation that maybe it will go through.
>> It's completely self-destructive, and it's obvious to the governor.
>> So this is a very common theme in America. It's frustrating and it's insane, and there it is, but it's going to happen.
>> This is the big problem with democracy, which is that voter understanding of the issues lags reality by a huge amount. In the past, when you had time to bring the population along, you could kind of manage it. But now we don't have time for this, and this is why we're turning to autocracy, so that we can get things done faster. But that's not a great idea either, and so we've got a huge governance problem at a macro level, globally. Alex?
>> Do you remember there was a brief
moment, maybe not so brief, during the pandemic when it was fashionable for senior technology executives to post "message received" on social media whenever California legislators or regulators would slow down business due to public health considerations or otherwise. And this was, I think, a fashion largely championed by Elon. Many of them moved to Texas or Florida to escape regulations. This time around, I think it's New York and other states. The beauty is we have orbital computing, and the "message received" moment of overregulating data centers is that this is all going to move off planet. This is all going to accelerate the Dyson swarm. It may be the primary business case for the Dyson swarm: regulations on planet Earth are suffocating our ability to do local compute, and that motivates the entire Dyson swarm. So I think in that sense this is in fact perversely quite exciting.
>> You know, two things real quick. First, this could be handled. The concern about the price of electricity and demand can be handled in two ways. Number one, a lot of these hyperscalers are buying their own nuclear plants, and coal-fired plants for God's sake, fusion plants. So that's important. You could require the data centers to have their own energy production, which would increase the total amount of energy production. The second thing is you could offer two different rates. Cap the consumer rate at whatever the number is, 4, 6, 7 cents per kilowatt hour, and then whatever the price needs to be for the data centers, you charge them differently. And in fact, you could say to the consumer: you're locking in your price for the long term, because the data centers are paying the extra amount.
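The two-tier rate idea described here can be sketched as simple arithmetic. A minimal toy model in Python; the rate values and usage figures below are illustrative assumptions for the sake of the example, not actual New York tariffs:

```python
# Toy model of two-tier electricity pricing: consumers keep a capped retail
# rate while data centers pay a higher rate that funds new generation.
# All numbers are illustrative assumptions, not real tariff data.

CONSUMER_RATE = 0.06      # $/kWh, capped consumer rate (assumed)
DATACENTER_RATE = 0.14    # $/kWh, rate covering new build-out (assumed)

def monthly_bill(kwh: float, is_datacenter: bool) -> float:
    """Return the monthly bill under the two-tier scheme."""
    rate = DATACENTER_RATE if is_datacenter else CONSUMER_RATE
    return kwh * rate

# A household using 600 kWh vs. a data center drawing 10 MW around the clock.
household = monthly_bill(600, is_datacenter=False)
datacenter = monthly_bill(10_000 * 24 * 30, is_datacenter=True)  # 10 MW for 30 days

print(f"Household bill:   ${household:,.2f}")    # → $36.00
print(f"Data-center bill: ${datacenter:,.2f}")   # → $1,008,000.00
```

The point of the sketch is the asymmetry: under this scheme the marginal cost of data-center demand lands on the data centers, so the consumer rate can stay capped.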
>> The problem, Peter, is that no one who's a populist leader is looking to solve the problem. They're looking to rally votes around their populist rant, and that rises to the top of the voting and percolates through government. It's just maddening that it works that way. But you can solve these problems for sure. I think Alex is dead right, though. It'll accelerate the rate at which we just move to jurisdictions, like space, which are not under any state law.
>> People will just export that AI advantage elsewhere.
>> And space.
>> Yeah, I think it wants to go to orbit. I mean, one lens to view this through is New York very generously subsidizing orbital computing and the Dyson swarm, which by the way probably won't get taxed in the state of New York.
>> Thank you.
>> A very generous donation by the state of New York to the Dyson swarm. It's the 21st-century equivalent of Ireland, where lots of companies used to host IP.
>> You know, I just want to point out one other thing. These types of revolts we see in the photo here, protesters, "Protect our future," "No big data." One of the concerns is going to be civil unrest. One of the senior AI leads in the world, whom I invited to come and speak at the Abundance Summit, basically said their policy in their organization was to do no outside speaking because of the death threats they're receiving.
>> And they can't get sufficient security.
>> So one of the big concerns is that when the populace turns against tech, there's going to be a target on the back of a lot of people in the AI and tech industry.
>> This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and precompiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
>> All right. Let's talk about robotics. I love this story, and this is the story that should be on people's minds versus "no data centers." So, FSD saves a father's life during a heart attack. You can look at the tweet separately, but on November 15th of 2025, this is from a son who said, "My father suffered a massive heart attack while driving. He could no longer control the vehicle, but his FSD engaged." And then the son goes on to say, "I remotely shared the location of the Tanner Medical Center to his Model Y. It immediately turned the car around and went to the ER. Without it, he would not have made it." I find this amazing, right? This is tech having your back.
>> And we're going to see more and more of this. We already know that self-driving is in fact the safest means of transportation, and it's going to flip the script on how we're transporting ourselves in the next 5 years. What this totally reminds me of is when I was a kid, everybody smoked everywhere. Every restaurant, every plane. We used to fly around a lot because we lived overseas, and they had four non-smoking seats at the very back of the plane. So the other 300 people in front of you would be blowing smoke at you. It was just...
>> Did the smoke respect that barrier?
>> I'll probably have lung cancer now, but it was everywhere.
And then one day it became uncool, and then another day it was illegal to smoke inside. That's going to happen to driving, too. Self-driving cars are 10 times safer, and the last person driving is probably not the best driver. It's probably the guy with the muscle car. So it's going to go from "well, self-driving is a nice feature" to "you want to drive your own car, you crazy psychopath? You're putting my children at risk because you want to drive." And that's going to tip. I don't know if it's two or three years, but when it tips, it's going to tip hard.
>> Yeah, we're going to have Dara, the CEO of Uber, on stage at the summit, and we're going to be having that conversation with him, in particular, how fast will it tip, right? We're going to have Amazon, Tesla, Lucid, Nvidia, Uber, and a number of other companies providing this. And so today on my average drive, I'll see 10 Waymos. I think in 5 years it's going to be, you know, 70 to 80% autonomous cars, especially hooked up to your AI.
>> I'll tell you what else, just one more thought on this. Sorry, Salim. I'm involved with a lot of insurance companies, including one that I'm the chairman of, and there are going to be many, many more things that need to be financed and insured in the post-AGI era than just cars. But in the insurance industry, every team and executive I've met has not even begun to plan for the post-AGI world. So the old is going away, and it's going to go away faster than people think, but the new is much bigger than the old.
>> Check out Lemonade. Lemonade Insurance was started by a graduate of Singularity University. It's a huge AI-driven insurance company. They have just, I think, cut your rates in half if you're using Tesla FSD.
>> Amazing.
>> Yeah.
>> There's a stat that always comes to mind here. About 15 years ago, if you remember back to the BlackBerry days, there was a three-day outage where nobody could send messages. The accident rate in Abu Dhabi dropped 40% during those three days. What that tells you is human beings should not be driving. We are terrible control systems for two-ton cars going at high speed.
>> The 16-year-old.
>> Yeah. We should turn it over to technology as fast as we can. It becomes a moral hazard to keep doing this, especially in an age of texting. My second-order effect that I really love quoting is that in the US, 50% of court cases are car accident related.
>> I mean, just 50%. So you take out a huge chunk of lawyers at the same time. So, you know, that's all good.
>> Well, and at the same time, if you're under a certain age, you know, 40 or 50, your life expectancy is effectively infinite now because of longevity escape velocity. So the expected life loss from taking chances driving today is much, much bigger than it would have been 20 years ago. I'm having a huge debate right now with Milan, my 14-year-old, because he wants to drive to get away from us.
>> And I'm like, you can't get a driver's license, because I've made a prediction that you will never get a driver's license. So you can't make me wrong. Now he wants to get a license just to show that I made the prediction wrong. But the notion in the future of having a 16-year-old, testosterone-laden boy driving a 5,000-pound vehicle at 60 miles an hour after just a few dozen hours of training will seem insane.
>> Yeah, just insane.
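The expected-life-loss point can be made concrete with a back-of-the-envelope calculation. A minimal sketch in Python; the fatality probability and the time horizons are illustrative assumptions, not actuarial figures from the episode:

```python
# Back-of-the-envelope: expected life-years lost per year of driving risk.
# If longevity escape velocity stretches remaining life expectancy, the
# same per-year fatality risk costs far more expected life-years.
# All numbers are illustrative assumptions, not actuarial data.

P_FATAL_PER_YEAR = 1e-4   # assumed annual probability of a fatal crash

def expected_life_years_lost(remaining_years: float,
                             p_fatal: float = P_FATAL_PER_YEAR) -> float:
    """First-order expected life-years lost per year of driving."""
    return p_fatal * remaining_years

print(expected_life_years_lost(40))     # → 0.004 (today's ~40-year horizon)
print(expected_life_years_lost(1000))   # → 0.1 (a post-LEV horizon)
```

Under the same annual risk, a 25x longer horizon makes the expected loss 25x larger, which is the shape of the argument being made here.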
>> I put this chart into our deck just to keep a sense of proportion here. So check this out: China has installed more robots than all developed countries combined. Right? I mean, look at this chart, with Japan, the US, South Korea, and Germany down at that flat curve at the bottom, and then China. And of course, this is because of their one-child policy, trying to maintain China as the manufacturing capital of the planet. But just to give folks a sense of this, any comments?
>> Well, you know, Elon shut down Model S and, was it Y?
>> Yeah. No, Model S and X. S and X, just to go full bore into robot manufacturing, which is brilliant because the robots will build a lot more things than the cars would have built.
>> But the question I'd have is what this chart is going to look like going forward, given that that alone is going to be a massive amount of production in the US.
>> Anything going on in Europe? But...
>> Yeah, we're just releasing our pod with Brett Adcock from Figure this week as well. So if you haven't seen it yet: Dave and I went to Figure HQ, and Brett gave us an amazing tour of the facility, and we got to see the three generations of Figure robots. It's going to accelerate rapidly, with both Figure and Tesla planning to make millions and then billions of robots. And we're talking here on this chart about a quarter of a million robots being installed.
>> Yeah. So this will be hilarious. That y-axis caps out at a quarter of a million, like you just said, Peter, and I think Elon's talking about tens of millions a year in just a few years.
>> Yeah. More robots manufactured than cars, by a large amount. One particular article in the biotech realm, one that Alex and I are both excited about: research achieved protection of brain synapses at cryogenic temperatures. I'll hand it to you in a second, Alex. I mean, here's the question. If you've got a medical condition that isn't yet cured but is likely to be cured in a decade, and you're on the verge of death, could you freeze yourself, then unfreeze yourself and benefit from all the breakthroughs that occurred in that decade? Or if you want to time hop: I want to see what it's like after the singularity, I want to be around when longevity escape velocity has been achieved, can you freeze yourself? Well, the challenge has been that when you do that, ice crystals form. And because ice volumetrically expands compared to the rest of the cellular fluid, it can disrupt and break the synapses that are the interconnections, effectively the stored memories in your brain. But this result came out and gives us hope. Alex, over to you.
>> This is a key advance that many in the field of cryonics have been waiting for. This is a result out of 21st Century Medicine, a startup that's focusing on reversible cryopreservation technologies. It works with the Alcor Foundation, which in America is the premier nonprofit that focuses on offering cryopreservation services. I would say, parenthetically, to the audience: if you've ever expressed or had interest in cryopreservation, cryonics, I would definitely encourage you to reach out to Alcor and see whether it's right for you. I don't have a financial stake, but I just scratch my head wondering why.
>> I have to be careful with what I say. I will say publicly I'm a huge supporter of Alcor and cryonics. A very big supporter. You know, I've never signed up for it because I didn't want to have a plan B. I wanted to make sure I'm focused on longevity. But as this technology matures, it becomes really, you know, a backup plan. As Ray Kurzweil said on this pod, it's maybe plan C or D.
>> I think it's such an important part of a portfolio approach to the singularity. One could maybe quibble over the right sequencing: should plan A be live long enough to live forever, and then plan B is uploading and plan C is cryonics, or vice versa? I'm not sure it matters a huge amount, but I would think anyone who's truly serious about acceleration and taking advantage of the acceleration: if you get hit by a bus tomorrow, then you're out of luck in terms of taking advantage of the post-singularity abundant worlds that we talk about on this podcast every episode. Why not avail yourself of cryonics as one asset in your live-long-enough-to-live-forever portfolio? It's a huge head scratcher for me.
>> A couple fun facts for anyone who's a doubter on this: there are species of fish and frogs that freeze rock solid in a block of ice all winter and then thaw out in the spring, and they're absolutely fine, because their cell walls don't rupture, because they have enough glucose or whatever inside the cytoplasm of the cells. So it's not far-fetched at all. Also, we've frozen egg cells and embryos, extracted the nucleus, and it's fine for mammals, you know, for actual mammals. So...
>> Well, we do this for IVF, right? If you do IVF, you typically will fertilize and freeze a number of eggs, and then you can defrost them.
>> And they're fine. So it's at scale and, as you said, not disrupting the cell membrane. We do it all the time for individual cells. We're doing it increasingly for tissue and blood. If we could reversibly cryopreserve blood, we wouldn't need local markets for blood transfusion. We could just have one large national market. Similarly for organ preservation; organ cryopreservation is an enormous problem. We wouldn't need all of these hyper-local state markets for organs. But
the big tamale, really...
>> Really interesting to me is that in all the sci-fi movies, you know, when they're going to Jupiter or whatever, they go into these chambers and slow down into suspended animation.
>> Yeah. But they don't freeze them. They just slow it down, but your heart's still beating. The fish and the frogs, they freeze: the heart stops to zero, the brain activity goes to zero, and then they thaw out in the spring and they wake right up. And that seems to me probably easier than trying to slow your metabolism to one beat per hour or something like that.
>> I think they end up being different mechanisms, different biochemistries. There's a whole body of evidence regarding nitrous oxide and suspended animation versus these vitrification agents and cryofixation. I think we want an all-of-the-above approach. But for the life of me, anyone who's listening to me: if you take home one message, forget the fun jabs about how the moon had it coming. Look into cryonics. You owe it to yourself.
>> I think there's a key point here: memory preservation is really the bigger frontier, more than longevity.
>> Salim, to your point, even the lobsters are starting religions around preserving their own memory. Like, how could the lobsters be outracing us? That's the really key point. And this is one of the Gutenberg moments that we track, right? Because this forces really uncomfortable questions about continuity of self. Identity becomes portable. All sorts of implications come about that none of us are prepared for, and we need to get into that discussion.
>> All right, everybody. We're stepping into part two of today's pod, an important one. About 6 months ago, Alex and I started on an effort to take a lot of the ideas that Alex has written about, the conversations you've heard here about our ability to be solving all areas, and the conversations I've been having about achieving abundance by 2035 across the board. We started a dialogue and said, you know, there's an important paper to be written here, similar to Situational Awareness or AI 2027, and it's been an incredible collaboration between Alex and myself. Alex is the first author. His ideas are brilliant here. It's been an honor to work with him to put this forward. We're going to be putting a link to the Solve Everything .org site in the show notes, so you can go there to get the complete paper. Our goal is to get this out into the world, out into the ecosystem. So we're about to have this conversation. The paper/book is nine chapters, and we're going to have a conversation limited to about five or six minutes per chapter to get the bold ideas out there. We've sprung this on Salim and Dave. And guys, thank you for playing this game, so that you could ask the questions that are most likely to be asked by our audience. So, love it. Alex, thank you for your support and for your leadership on this. Are you ready to jump in?
>> No one expects the singularity, Peter.
I'm ready.
>> Okay. Amazing. All right. So if you want to give a minute of intro on this, and then we'll jump to chapter one.
>> Sure. From my perspective, one of the motivations for writing Solve Everything is that I get asked questions all the time: What do the next 10 years look like? Why don't you say something a little bit more concrete, a little bit more actionable, about what people can do? And also a lot of questions about what it even means to solve math, and why should I care? So in some sense this, if you want to call it an essay or an ebook or even a manifesto, is an attempt to answer the question of the "so what," and also the "so what now." And I should, yeah...
>> I was going to say, one of the things that comes across, that we talked about, is that the next 18 months to two years are going to set the rules down for the next century.
>> That's right. So, a super critical time, and we wanted to lay that out in this paper. The example you gave in the paper is that the QWERTY keyboard, which was designed in the 1800s to stop keys from jamming against each other, still persists. So the decisions being made over the next 18 to 24 months are going to persist for decades, perhaps centuries. Really important time. Technologies get locked in, Peter, including but not limited to the QWERTY keyboard, as I've joked on the pod in the past. We're going to be stuck with QWERTY until the heat death of the universe. Mhm.
>> All right, let's jump in.
>> Just on that point, if we ask the multis to not use QWERTY, in one hop we'll get rid of it. So, there's that.
>> Yeah, but then they won't be able to talk with you. And they're not really using QWERTY anyway. They're using tokens. Yeah.
>> All right. Chapter one, the War on Scarcity. Would you please introduce this?
>> Yeah. So this chapter introduces an idea, call it a theory of history: that the most important changes in human history have been a set of revolutions, some recognizable, some maybe less so. We argue the first revolution of note was the scientific revolution, which we frame as a war on ignorance. Ignorance was the enemy, and the key weapon was the method, the scientific method. It's funny, I'm hearing myself speak this and at the same time thinking back to earlier in this episode when I was lambasting Marx. Put Marx back on the shelf, or tear it up, and listen to this instead. The second revolution was the industrial revolution, which we frame as a war on muscle, a replacement for muscle, and the weapon of choice was the engine, the steam engine in particular. The third revolution, the digital revolution, was a war on distance, and the weapon was the bit. And Charles Stross in Accelerando does an amazing job, in my favorite scene in Accelerando, arguing that maybe the singularity actually happened in the late 1960s, when the first internet packet was sent from one place on the ARPANET to another, thereby decoupling bits from atoms. But nonetheless, the weapon in the digital revolution was the bit. And we argue that we're now in the early stages of the intelligence revolution, which is a war on human attention, which right now is scarce. We're fixing that with superintelligence, and the weapon this time around is the token. And we argue that revolutions are predictable and follow phases, going from scarcity to legibility to creating harnesses, we'll talk a bit more about that in a minute, to institutions, to finally abundance. That's the story.
>> And I think one of the points we make in the chapter is that the lone genius is dead. What people need to do now is build systems that let millions of people solve entire categories of problems.
>> That's right. Or, put differently, artisanal intelligence is cooked. I say it is cooked.
>> Dave or Salim, question or thought?
>> Two or three thoughts. One is, I don't know about starting at the scientific revolution; we had the agricultural revolution, which used tools to do various and very powerful things. So you could argue that's the first one, but that's semantics. I do like the framing around this. The problem I have here: you're treating scarcity as technological. What I see is scarcity as more institutional, right? Scarcity today is enforced by regulation, incentives, legacy power structures, not so much lack of capability. So we have to re-engineer those, where I think you're thinking about routing around them. We have to re-engineer those, because otherwise we'll end up with that challenge. That's where I have the biggest issue with this. But in general, absolutely, once we have more and more intelligence, great. But the institutional issues we've got to deal with.
>> I think you raise a very important point, Salim, and I almost want to frame it as a duality. One side of the coin says scarcity is the result of inequitable distribution of resources, and the other side of the coin says scarcity is downstream of the pie not being big enough. And I think...
>> Well, both of those are true, obviously, because you can solve for both sides of it, right? Right now our institutions are optimizing totally for the wrong metrics. So I think the question is always, at least I would suggest, asking which is easier on the margin: making the pie larger, or redistributing the existing pie.
>> Chapter 2 is called The Thesis.
>> Wait, does Dave have any points?
>> No. You asked what I was going to ask.
>> We're good. We're going to keep this moving along, because there's a lot of juice here.
>> All right, Alex.
>> Right. So the thesis of The Thesis is, (a), that cognition is becoming a commodity. Intelligence is just going to flow like oil does, and we've made the point on the pod in the past, this is a bit of a cliche, but admittedly, that GPUs are the new oil. So (a), cognition is becoming a commodity. And (b), benchmarks, which we think are actually more profound than just the evals of the moment. A lot of people got excited when I did a walkthrough of all the GPT 5.2 benchmark consequences. I think it's actually more profound than that. We talk in this chapter, and in this extended essay, if you want to call it that, about targeting systems: basically, if you want to industrialize progress, which is I think the era we're finding ourselves in, it's essential not just to think of benchmarks and evals as isolated occurrences. Think of them as systems for targeting enormous capabilities. So I've made the point in the past that we need more and better benchmarks. The world needs stronger, harder benchmarks. But I think the right metaphor, certainly a metaphor that we talk about a lot in this chapter, is thinking about artificial superintelligence as an explosive. I mean, we also refer to it often as an intelligence explosion. Pulling on that metaphor: if you have an explosion and you want it to be productive and not destructive, you have to shape it. There's a notion when you're building explosives, this isn't a manual, of shaping the charge, of providing a shaped charge to direct the force for productive applications.
>> It's like a rocket engine: thrust at one end pushing you up. Yes.
>> Like a rocket engine. And a rocket engine is a beautiful example of, in some sense, a shaped charge for an explosion, a shaped explosion. So we argue in this chapter that rather than just letting superintelligence be used on an uncurated set of problems, we should instead be aiming it through the nozzle, if you will, the rocket-nozzle equivalent, of moonshots. And in particular, if we don't do that, then what will happen is a sort of puddle, which we call the muddle, a bit of alliteration, of bureaucracy that will instead just focus the world's superintelligence, to the extent we even get enough of it, on problems that make use of input costs in a way that's highly inefficient. So really the argument is: shape the charge of superintelligence.
>> Another point that I think is very important, and that we flow throughout this, is a shift from paying people for hours of work to paying people for the solutions they deliver. Right? So if you're hiring a law firm at $100 an hour to review contracts, in the new world you're not paying them to review contracts. You're paying them for delivering an error-free, legally tight agreement. Period. It's verified outcomes. And we're going to flow this throughout. I mean, this is a change, I think, that's going to hit us like a wave. You're only going to be hiring companies and AI systems that deliver you definitive, verified outcomes.
>> That's right. And one of the most egregious inefficiencies that one might see throughout the economy right now is people paying for the inputs when they should be paying for the outputs. Paying by the person-hour for labor when you should be paying for the achievements of whatever the economic system is. And I think it's only by moving to this sort of performance-based or outcome-based economic mindset that we get all the benefits of abundance.
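The input-versus-output billing shift discussed above can be made concrete with a toy model. Everything here, the names, the fees, the verification flag, is an invented illustration, not anything from the paper:

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    hours_worked: float    # input actually consumed
    hourly_rate: float     # e.g. the $100/hour contract review
    outcome_fee: float     # agreed price for a verified deliverable
    outcome_verified: bool # did an independent check pass?

def input_based_invoice(e: Engagement) -> float:
    """Pay for effort: cost scales with hours, regardless of result."""
    return e.hours_worked * e.hourly_rate

def outcome_based_invoice(e: Engagement) -> float:
    """Pay for verified results: full fee if the outcome checks out, zero otherwise."""
    return e.outcome_fee if e.outcome_verified else 0.0

# A hypothetical contract review: 40 hours at $100/hour, versus a flat
# $3,000 fee contingent on delivering a verified, error-free agreement.
job = Engagement(hours_worked=40, hourly_rate=100, outcome_fee=3000, outcome_verified=True)
print(input_based_invoice(job))    # 4000.0
print(outcome_based_invoice(job))  # 3000.0
```

The point of the sketch is the incentive flip: under the first function, slower work earns more; under the second, only a verified outcome earns anything.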
So it feels to me like this is really two chapters, or two thoughts, in one section called the thesis. You know, one is that ASI is inevitable. The other is really compelling, which is the shaped charge. It really dawns on me that the graphical stuff, the virtual girlfriend, is very compute-intensive, and solving a disease or solving physics is actually not any more compute-intensive than one person's virtual girlfriend. And so the choices on how to use our very limited amount of compute over the next two or three years are critically important.
>> If you focus it right.
>> Yeah. I love the fact that you're taking this on, because there's no body of authority right now that's even thinking about it that has any power. So hopefully this wakes a lot of people up.
>> You've articulated it beautifully, Dave.
>> So wait, I've got a couple of points here. I think saying that cognition is a cheap commodity is fabulous. I think it's really important, and the use of that in solving big problems is really, really important. I think it's great to say let's evaluate and reward outcomes rather than rewarding work. But I've got to push back on the "ASI is inevitable" thing. That's a philosophical statement rather than a scientific one, and I think it weakens the paper. I'd rather you say something like: given the current incentive structures, scaling intelligence is a much more important attractor state, right? Because that will then lead you to where you want to get to.
>> I would say, I mean, I think it's an interesting point, to be sure. But I think there's almost an instrumentally
convergent trap that I see a lot of frontier labs at least partially fall into, which is: okay, we have superintelligence, at least baby superintelligence, right now. How do we allocate it? In particular, what fraction of your compute budget, if you're a frontier lab, do you allocate to building the perfect AI researcher that can recursively self-improve, as we talk about in almost every episode at this point, versus how much of your compute budget, which is scarce, do you spend solving everything else? I think that's the fundamental quandary here: how much to reinvest in recursive self-improvement versus now finally using at least some of the compute to solve everything else. Solving that asset allocation question is key, and then, within everything else, how do you distribute it?
>> There's Peter's law, which is: given the choice, do both.
>> Alex, this is also going to be true for the entrepreneur, for the company, right? We're all going to have compute budgets. In the end, you have a certain amount of compute you have access to. Where do you aim that compute? It's a wavefront that you can aim in a direction you want to solve. And when you do that properly, it not only enables you, but enables everybody else to build on top of it.
>> That's right.
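The allocation question raised here can be sketched as a toy optimization. Every functional form and number below is an invented assumption for illustration only: compute reinvested in self-improvement linearly multiplies the productivity of the remaining compute, and we sweep the split to see which fraction delivers the most solutions:

```python
def solutions_delivered(total_compute: float,
                        reinvest_fraction: float,
                        improvement_gain: float = 0.5) -> float:
    """Toy model: compute spent on recursive self-improvement multiplies
    the productivity of the compute left over for solving everything else.
    The linear gain is a made-up assumption, not a claim from the paper."""
    reinvested = total_compute * reinvest_fraction
    productivity = 1.0 + improvement_gain * reinvested
    return (total_compute - reinvested) * productivity

# Sweep the split: what fraction of a fixed budget of 10 units should go
# to self-improvement versus direct problem solving?
best = max((f / 10 for f in range(11)),
           key=lambda f: solutions_delivered(10.0, f))
print(best)  # 0.4
```

Under these invented parameters, the optimum is interior: reinvesting everything (fraction 1.0) yields zero solutions, reinvesting nothing forgoes the multiplier, and a middle split wins. The real question, of course, is what the actual gain curve looks like.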
>> Um, I'll move us on to chapter three here. And again, please, there's so much content here; we really want you to take a look at this paper and read it. We're just giving you a quick overview of the mechanics. Alex, over to you.
>> Okay. So first, I think in this chapter we finally, definitively address the question that I get asked every time I'm making a point about AI solving math, which is: what does solving mean? What does it mean to solve a domain like math? We provide a more thorough definition in the chapter, but heuristically, the shorthand is: to solve a domain means you can get it to the point where you can just pour compute on and problems get solved. It means you have all the architectural pieces in place, and I'll talk in one second about what the architecture looks like or should look like, such that you can scalably, literally, pour more compute on and get more solutions out within that domain. So, for the avoidance of doubt, when I talk about solving math or solving physics or solving other domains, that's what I'm talking about.
>> I would just say, Alex, on that: it's no longer the domain of a single genius to work on something and hope they got it right. With AI compute, as you said, it's a matter of where you want to aim that shaped charge.
>> That's right. We're seeing the industrialization of cognition and the bulk solution of multiple fields. I should also add, parenthetically, as a preliminary matter on this narrow topic, that I also have a portfolio company named Physical Superintelligence that's trying to solve all of physics with an approach like this, just for full disclosure purposes. The architecture involves several layers. You need a purpose; that's the objective function or the goal. You need a task taxonomy, which is essential: a suite of tasks that are going to be solved, almost the map of the terrain that you're going to solve. And when we talk about making sure that compute is being used efficiently and wisely, as a targeting system, or through the lens of a targeting system, to solve lots of problems, the task taxonomy is absolutely essential. Third, observability. You need raw data from data streams or sensors that you're going to use to adjudicate whether you're making progress. Fourth, you need the targeting system itself. I've argued on this podcast and elsewhere many times that we need more harnesses, more benchmarks, not just to make sure we're making progress, but to actually shape the charge and shape the progress. Many AI techniques depend on benchmarks and evals in order to make progress in a given field. The next item, the model layer, is the most obvious one: we need AI models that are capable of functioning as a virtual brain for solving problems. And fortunately, those are improving pretty rapidly. Next, we need modes of actuation. It's insufficient for us to just know. You know those television commercials: "Well, I stayed at a Holiday Inn Express last night, therefore I know how to solve the problem." Similar idea here. Maybe that's a bit too cute, I don't know. We need modes of actuation: hands and APIs that are able to reach out into the physical world, or the virtual world, or the biological world, and shape the impact on the world, given better ideas coming from the AIs. And then finally, we need better modes of verification, red teaming, governance, distribution. That's what we call the industrial intelligence stack. Whereas previously, during the industrial revolution, we might have spoken about rotors and combustion engines and various forms of electromechanical systems, these are the key components, I think the key layers, of the intelligence revolution.
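The layers Alex enumerates (purpose, task taxonomy, observability, targeting, model, actuation, verification) can be sketched as a minimal data structure. All names, signatures, and the toy scoring logic below are illustrative assumptions, not the paper's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DomainStack:
    """A hedged sketch of the 'industrial intelligence stack' as described."""
    purpose: str                          # objective function / goal
    task_taxonomy: List[str]              # the map of the terrain to solve
    observe: Callable[[str], float]       # raw data stream -> progress signal
    benchmark_threshold: float            # targeting system: when is a task solved?
    model: Callable[[str], str]           # virtual brain: task -> candidate solution
    actuate: Callable[[str], float]       # hands/APIs: apply solution, observe result
    verify: Callable[[str, float], bool]  # red teaming / governance gate
    solved: List[str] = field(default_factory=list)

    def pour_compute(self, units: int) -> List[str]:
        """'Solving a domain' in the chapter's sense: scalably pour compute
        on and get solutions out, one task per compute unit in this toy."""
        for task in self.task_taxonomy[:units]:
            candidate = self.model(task)
            result = self.actuate(candidate)
            if result >= self.benchmark_threshold and self.verify(candidate, result):
                self.solved.append(task)
        return self.solved

# Stub usage: every layer is a trivial placeholder that pretends to score 0.95.
stack = DomainStack(
    purpose="solve protein structures",
    task_taxonomy=["protein_A", "protein_B", "protein_C"],
    observe=lambda stream: 1.0,
    benchmark_threshold=0.9,
    model=lambda task: f"structure_of_{task}",
    actuate=lambda solution: 0.95,
    verify=lambda solution, score: True,
)
print(stack.pour_compute(units=2))  # ['protein_A', 'protein_B']
```

The design point the sketch tries to capture: once every layer is in place, throughput is bounded only by the `units` argument, i.e. by compute, which is the chapter's definition of a solved domain.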
>> You know, the alpha for entrepreneurs here is, we've talked about these waves of solving areas and problems, right? We're about to flip math, coding, physics. So your job now as an entrepreneur is to figure out which industry is about to make this flip, where to focus your compute wallet to make that happen, and how to help solve an area of passion for you. Dave? Salim?
>> I'm kind of curious. You know, I'm used to launching a couple hundred agents, 256 agents actually, to work in parallel on a problem. And if the scaffolding you're describing is right, it comes back just perfectly solved. And if it's even slightly flawed, you have, you know, a $2,000 bill and a bunch of crap.
>> How much are you spending per day on those agents, Dave?
>> Yeah, well, it's a hundred bucks every few minutes popping up on my screen here.
>> It's not quite that bad. It does seem like it's every minute, but it's not. But I'm curious, you know, to what degree this is actual engineering. Are these five layers true scaffolding, like hard code, or is it more conceptual?
>> I think it's a balance of both. I also think, to some extent, it's a trick question, because increasingly the harness and the scaffolding itself is being generated by the models. So to the extent that we're in the era of recursive self-improvement, this entire architecture is itself an artifact, a downstream product of itself.
>> Yeah, I totally agree, and I also think that's the path to insanity, because at some point you have to say this is hard code. Because, you know, the AI will invent the next thing, and the next thing, and it goes to infinity, and then you just lose your mind.
>> I would say also that this, in my mind, is the way we prevent insanity in an era of recursive self-improvement: with these benchmarks, these targeting systems, that make sure that as systems are recursively self-improving, we can quantitatively measure what they are optimizing towards. Are they going in a constructive direction or not?
>> Yeah.
>> Chapter four: the lock-in.
>> Wait, wait, wait. I've got a couple of comments here. Can you go back a slide? Okay. So, I really love the shift from genius to logistics, because you're taking something from a black art and making it a prescriptive process. And when you can do that, that's awesome. I think that's fantastic. But I have an issue with your maturity levels, because you present them like natural law when it's really just a taxonomy. We've had lots of industries get stuck at different levels, like autonomous driving, et cetera. So this feels like a framework retrospectively imposed on what's going on. I think it's great aspirationally, right? But calling it a maturity curve speaks of an inevitability, and that may not be exactly the case. It's more of a descriptive model than a predictive one.
>> I would say any good theory of history, and Solve Everything is in part not just a theory of the future but a theory of history and how revolutions have worked in the past, is inevitably, as Monty Python says, "only a model." So I do think there is an element of model building here, where we're trying, for the first time, to articulate a self-consistent, coherent theory of how this is all supposed to work. How is the singularity supposed to play out over the next 10 years? And to your point, Salim, about the autonomy model...
>> And Alex, I would say: not only how it's supposed to play out, but how do you have it play out in a way that leads us towards abundance versus towards a muddle?
>> Normatively, how should it play out, not just how will it play out. But I think, at the margins, one can quibble: well, actually, there are seven maturity levels for industries to evolve through their industrial intelligence stack, or it's a continuum. But I think the central point stands regardless of how one splits hairs on maturity levels: we're seeing over and over again, and we can get into more detail on this, domain after domain, industrial vertical after industrial vertical, succumb to basically the automation of intelligence, which used to be the province of individual, artisanal, lone innovators, and is just becoming an industrialization of intelligence.
>> All right, I'm moving us on to the next chapter, chapter four. I'm sorry, keeping us moving: the lock-in. Alex?
>> So in this chapter we talk in part about AlphaFold from Google DeepMind, and argue that it was a template for entire collapses of domains, almost overnight. And I've made this point on the pod in the past: AlphaFold 3 took the problem of determining the structure of a protein, which used to require a biology PhD student five-plus years of laborious benchwork just to determine the structure of a single protein, and almost overnight solved that problem across many millions of proteins, known and unknown. That's, in my mind, the prototypical example of a domain collapse. And we argue in this chapter, the lock-in, that we're now in a phase of history, of future history, where this is just going to happen over and over again across different fields, where intelligence shifts from an artisanal craft to a utility that just flows. And we argue that we have approximately 18 months or so to decide what direction to shape the flow in, and to set the standards for how this is going to be done at scale, given that we are dealing with scarce compute, and to put in place the supply chains, which are huge. Gang, we talk on the pod all the time about all these supply chain scarcity issues: memory chip crises, GPU crises, what happens to Taiwan, what happens to the semiconductor fabrication facilities in the US versus not in the US, and then all the data rights. We argue we're in a critical 18-month period when all of these details are going to shape the intelligence explosion. And so we want to make the best decisions in the next 18 months. I can't get over it, actually: 18 months is such a short timeline.
>> Another important point here for CEOs listening, for entrepreneurs listening: the race isn't about building the best AI, it's about writing the best scorecard that everyone else is graded on. So what does that mean? Take today's healthcare system, an example Alex uses beautifully. In today's healthcare system, the benchmark is the number of patients processed per hour, right? Which means it's driving a lot of short visits with the physician, driven by cost economics. But what if the benchmark instead were patients who are still healthy five years from now? That would set up a whole different set of optimization outcomes. So writing the scorecard that your AI system is going to use to measure success is critically important.
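Peter's scorecard point can be shown in a few lines: the same data ranks providers oppositely depending on which benchmark you write. All figures below are invented for illustration:

```python
# Two hypothetical clinics, scored on throughput versus long-term outcomes.
clinics = [
    {"name": "QuickCare", "patients_per_hour": 8, "healthy_after_5y": 0.60},
    {"name": "SlowCare",  "patients_per_hour": 3, "healthy_after_5y": 0.85},
]

# The benchmark you optimize for determines who "wins".
by_throughput = max(clinics, key=lambda c: c["patients_per_hour"])
by_outcomes   = max(clinics, key=lambda c: c["healthy_after_5y"])

print(by_throughput["name"])  # QuickCare wins under a visits-per-hour benchmark
print(by_outcomes["name"])    # SlowCare wins under a five-year-health benchmark
```

Nothing about the clinics changed between the two lines; only the scorecard did, which is exactly the leverage the chapter attributes to whoever writes the benchmark.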
>> So why is this chapter called the lock-in, exactly? Are you implying that the decisions we make in the next 18 months will lock humanity into a path for the rest of time?
>> Maybe not for the rest of time, but that is the inspiration for the name, inspired in part by the annealing of a metal as it cools: the decisions that we make now are at least going to lock in a chunk of our future light cone.
>> Yeah, it makes sense. Totally makes sense.
You know, take the QWERTY keyboard. That was decades of lock-in. So I think... but I do like the...
>> We're stuck on the QWERTY keyboard. It's like, we could have the singularity and you'll still be...
>> How long before we get past that, and can we stop you? But anyway, I really like the AlphaFold example demonstrating a domain collapse, right? That's really great. But here you're talking about lock-in as a technical inevitability, and many times this is a policy and a governance choice, right? It's monopolistic APIs. It's closed data. It's regulatory capture. There's lots of other stuff. How do you distinguish between bad lock-in and productive outcomes?
>> That's tough.
>> I mean, in your perfect world, are there, like, five jurisdictions with different choices, so at least we have variety? Or is it inevitable that there's just one lock-in?
>> I think, in some sense, that's the grand geopolitical question. Not as a normative answer but just a descriptive one: it seems like we're heading to a near future where there are going to be multiple spheres or zones of influence, each able to independently lock itself in. So to the extent that we, with this, call it an extended essay, can have any influence, I think the aspiration is to have a positive, constructive influence on all of those spheres of influence, and not just one.
>> Mhm.
>> By the way, I disagree with the 18 months. When I've been advising some big-company CEOs, I've been saying two years.
>> So you're pulling a reverse Moore's law. Remember, Moore's law started at 18 months and became 24 months. You're pulling a reverse Moore's.
>> Because if you have the next meeting six months from now, it's going to add that six months of time.
>> Anyway, go ahead.
>> All right, let's go to chapter five: the mobilization. And Alex, if it's okay with you, the last three chapters of this paper are the most important. I want to hit on chapters five and six and then really focus on seven, eight, and nine. So give us a summary of the mobilization, if you would.
>> All right. The idea with this chapter is spelling out a future timeline for how, call it a wavefront of the explosive shock of the intelligence explosion, is going to propagate: from math, which we talk about on the pod all the time, over the next couple of years, to the physical world, physics, chemistry, materials science, biology, and then through the end of the decade toward planetary systems, fission, fusion, the Dyson swarm by the early 2030s.
>> Amazing. And chapter six, the engine.
>> Yes. The engine is very practical. It talks about how to design the targeting systems, the benchmarks, at a sufficient level of rigor that readers and folks all over the world can implement them with some level of confidence.
>> You know, the point we make here is: don't invest in the AI models. If you look at the train and train-track analogy, the trains are becoming commodities. It's the tracks, right? The tracks that the trains run on: the scoring systems, the testing infrastructure, the data systems, the funding mechanisms. They're laid out beautifully here. Those are the elements that are most important for entrepreneurs and CEOs to be focusing on.
>> That's right.
>> Let's go to chapter seven, one of my favorites: moonshots.
>> So here, and maybe Peter, you want to speak to this one perhaps even more than I do, we lay out 15 different moonshot-level missions that we argue are good uses, maybe optimal uses, for this targeting-system capability as we start to channel superintelligence into productive applications. Maybe, Peter, I'll pass it back to you for your favorites.
>> Sure. So the thought is, you know, many of us have discussed XPRIZEs over time. The notion is that there are these giga XPRIZEs, these massive opportunities on a humanity-level scale, from printing human organs, to achieving fusion, to understanding the fundamentals of unified field theory in physics. And it's: where do you as an entrepreneur, or you as a CEO, or you as the head of an organization, want to focus this incredible superintelligence that's coming, to take moonshots? I keep on saying, you know, in the educational field, if you're using AI as a ninth grader to solve a ninth-grade homework assignment, you've lost it, right? If you're using AI to build starships, that's it. So how do we as humanity go after problems that we would never have imagined we were capable of doing? The chapter lays out 15 different moonshots just to get the creative juices going, to say: these are capabilities that we're going to be able to bring to bear to solve these moonshots.
>> Can you list out a couple of the moonshots, just to anchor the viewer?
>> One of my favorite ones is interspecies communication. I have a soft spot for that. We talk on the pod all the time about uplifting non-human animals. And I think as we start to think, maybe somewhat controversially, about what future forms of personhood might look like, solving problems like interspecies communication, or solving hard problems in physics, those definitely have soft spots in my heart.
>> Yeah. I think it's making humanity a multiplanetary species. It's getting to longevity escape velocity. It's all the things, you know. It's basically speedrunning all the science fiction movies, the positive, non-dystopian science fiction movies that are out there.
>> Yeah. You know what I love about this? If you look at John F. Kennedy and going to the moon, the brand effect: enabling somebody in power like John F. Kennedy to tie the brand of the mission back to them is critically important for them to then inspire the world that this is important. And I think what we did wrong here: our governor did an incredible job of unleashing $3 billion from the legislature to try and become an AI leader, but it was too vague. It's like, what does it mean? So the money hasn't even been deployed. But if you tie it to these 15 moonshots, and the governor says "we want our state to win this race like John F. Kennedy did with the moon," they can pick the one they're passionate about and unleash it. And we have 50 states, you know. They can all choose their favorite of the 15. Maybe not talking to aliens, but whichever one they latch on to. It's such a really great framework.
>> I'll just literally list some of them. Doubling human lifespan is one. Ending hunger with synthetic food systems around the world is another. AI-empowered education for all, at the highest possible level, right? High-bandwidth BCI; we've been talking about that on this pod for a while now. Demonstrating human mind uploads; can't wait for that, you know, plan B, maybe plan C, we'll see. As Alex said, interspecies communications. Understanding human consciousness; I think we've talked about that previously. You know, can we understand human consciousness? At which point maybe we'll understand consciousness for our AI systems as well. So, you know, what have we dreamed about? Another one I love is disaster prevention and avoidance: predicting earthquakes and then preventing them, or tsunamis, as the case might be, right?
>> These become natural XPRIZEs, even, you know...
>> They are. They're what I call giga XPRIZEs here. But I think one of the important things in this chapter is allowing people, in fact demanding people, to dream bigger than ever before, because the tools we have to solve the biggest problems are now epic. I think this, for me, is the most powerful part: the fact that anybody now has the agency, leveraging these tools, to go after what seem like impossible things. You're only limited now by your imagination.
>> And your compute budget.
>> And your compute budget. But you know, that's dropping 90% a year, so we're in good shape.
>> That's right.
>> All right: the muddle versus the machine. At first, Alex, when you proposed "muddle" as a term, I was like, I'm not sure I like it. Now I love it. So describe what the muddle is.
>> Yeah. So the muddle, another term might be the bureaucratosaurus, loves to measure inputs rather than outputs and slow down progress. And the idea is that without properly shaping the charge of the intelligence explosion, the muddle is the end state we find ourselves in, sort of muddling our way through, which is one of the etymologies of the term. What we talk about in this chapter, in a single sentence, is what happens after we win: painting a positive and non-dystopian view of, in particular, what human agency looks like. I made a short film, posted to social media, called "A Nation That Learned to Sprint," depicting what life in the early 2030s might look like if everything goes well and we see GDP 2x-ing or 3x-ing year over year, and what a human, quote-unquote, "job" even looks like in a macroeconomic scenario like that. So in this chapter we lay out lots of new job opportunities, career opportunities, that will be available to humans, at least unaided humans. Target designers, for example, or data rights brokers: people who are involved in shaping the targeting systems, and shaping how we aim, fire, and verify superintelligence toward the hardest problems humanity faces. This is going to be a growth industry from a job perspective. Another point we make in the chapter here that's super important, and Salim and I have discussed this before, is that GDP is a terrible mechanism for measuring economic health. Right? So the paper proposes replacing GDP with something called the Abundance Capability Index, which measures a nation's capacity to solve problems rather than how much money changes hands. So I think, again, as we look at benchmarks, as we look at rails and harnesses, understanding this is really important.
>> I think the challenge here, though, is, you know, it's UBI, UBC, whatever you want to call it. It's a great end point and a great aiming point, and you want to have a target, as you say, Peter, otherwise you'll miss it every time. The challenge is moving from a welfare, taxation, labor-union structure to that. It's such a huge leap. I have no confidence in the public sector getting us there. So how do you navigate that? I think that's something worth exploring, maybe beyond the scope of your thing, but it's a huge consideration.
>> I was going to say, Salim, what a wonderful transition to the last chapter.
>> Build the rails. Building the rails, chapter nine. I think one of the most important chapters of the entire paper, Alex.
>> Yeah. So this chapter is where we lay out the answer to Salim's question: the "so what," and what do you do if you're not running a nation-state? What can you do? How are you empowered to shape this transition, to shape your own moonshots, and to control your own targeting system? We lay out various suggestions. For investors, as indicated on the slide: fund the primitives, not the applications. There's so much infrastructure that can, and arguably should, be built out. If you're an entrepreneur, you should be picking your own targets with the targeting system. Create your own benchmarks and aim your own compute. If you're an executive of a large company, you should be measuring the outputs, not the inputs. Dave, I think you put it beautifully earlier in this episode, talking about the API-ification of large corporate boards and corporate governance. I think that's exactly the right playbook here, and the missing factor is having a benchmark to measure corporate objectives, such that the problem of corporate governance becomes a matter of maximizing the use of available scarce compute to maximize those KPIs and those evals. So in this chapter we lay out, for a variety of different roles in the economy: what can you do? What can you, in the audience, do to help us achieve a utopian vision of abundance and post-scarcity, and a eusocial use of superintelligence?
>> So I want to wrap this here. I want to encourage all of our listeners: we'll put the link to the paper down below. It's solveeverything.org. Please take a look, load it into your favorite LLM, have a conversation. What Alex, and to some degree myself, but I credit Alex, laid out is the vision for the decade ahead that's going to bring us to abundance. How do you do it? How do you lead as a leader, as an entrepreneur, as a CEO, as a governor? Where are we going? And it's going to move much faster. And I think one of the points here, Alex, is that there's going to be such a distinction between those who do and those who don't that it's going to create a sort of 66-million-years-ago asteroid strike that kills the dinosaurs and elevates the furry mammals. I'd say furry lobsters. Moving forward...
>> No, we love our lobster friends. He didn't mean that. Peter really didn't mean that.
>> No, no, no. Elevate, elevate our lobsters. I would say that.
>> Elevate them into low Earth orbit.
>> All right. Uh, a favorite part for all of us AMAs. I'm going to keep us to one question per mate. Um, all right. So,
here they are. There are nine of them.
Uh, let's see. Dave, do you want to pick first?
>> Sure. I like number three because it's such a happy answer. In a world with perfect AI output, will there still be a place for human spark in art and sculpting? Will handmade work have higher value, or be buried in AI humanoid production? I wholeheartedly believe it'll have astronomically higher value. The human touch will be so rare and so valuable, and the abundance of capital will be unbelievable. Current artwork is one of the best investments you can make right now, but going forward it is a category that will go up tremendously in value, and people will appreciate all things human, whether that's human action, human sports, human poetry, human artwork, sculpting. I expect it to be definitely a rising area for sure.
>> I think that would be a great conversation, I'll call it a debate, for one of our next pods: what is going to be most valuable from humans in the future. Select one of these.
>> Let's see. I would pick number five, right? Which is: how is a young person supposed to earn an income when they compete against a model that costs $50 a month? That's from @clownpiece D. It's a great question, but you're assuming the future is about competing with AI. It's about directing it, leveraging it, and amplifying yourself with it. You know, in history we've destroyed old jobs, we've created control points, we've done orchestration, we've done intent. So winning isn't productivity, it's agency. And we talked about this earlier in the podcast. Knowing what to do and why it matters is more important. How do you mobilize intelligence at scale? That's really the biggest challenge. And you can do that today in a way that you never could before. We've been doing workshops with teenagers, showing them how to use AI as a superpower to give themselves agency. And I think that's where I would go with that.
>> Alex, would you pick one of these?
>> All right, I like this assortment, so I'll pick number eight, for 100 trillion. Question number eight is: with AI taking tasks we do ourselves, isn't there a risk we lose essential skills and become completely dependent on AI services? That's asked by Joroan Hoffs. So I want to invoke my friend John Smart, hope you're listening. John has, I think, a brilliant dictum: the first generation of any new technology is dehumanizing. It takes away all your skills. The first generation of calculators takes away your arithmetic skills. The second generation is net-neutral to humanity. The third generation, and here's another friend of the pod, Stephen Wolfram of Mathematica, gives you new superpowers, gives you new skills. So I don't accept the premise that there will be any sort of permanent loss of essential skills due to AI automation. I do think there is a short-term substitution effect where AI drives down the cost of various skills or various tasks, but over the long term I expect AI automation to be net superhumanizing. We're going to be capable of so much more with AI than we can do otherwise without it. And I'll also say Vernor Vinge has written quite a bit about this; I definitely encourage everyone to read Rainbows End and Fast Times at Fairmont High, a novel and a novella respectively, that talk about this ad nauseam. I think we're going to find ourselves in a very near-term future where, just like there's wilderness camp to learn how to survive without modern technological aids, we're going to start, at least in the better parts of our educational system, having the moral equivalent of a wilderness camp for AI, where all of your AI tools get taken away and you have to do things manually, just so that you at least have that skill set. Then you get all your AI tools back, and every fourth grader becomes a Nobel laureate.
>> I love that. All right, I'm going to close this out with number six: "I use Claude daily. It fails in basic consistency," I think he's saying. "How can this be close to AGI when I have to check every output for errors?" That's from MMGPT9. So I'm going to say again: AI is the slowest and most incorrect it will ever be. I know when I'm using my Claude bot, or Claude 4.6, if I get something that seems off, I will ask it to check itself, being able to use this in a recursive fashion. Also, MMGPT9, we're in a period of recursive self-improvement. I think we're at the steepest part of the curve, and it's going to become more and more capable every day. And the idea that we can use AIs to check AIs, and in fact to do deeper reasoning, is going to eliminate this very quickly. Okay.
Let's jump into our outro music. This is from friend of the pod, CJ Truheart. CJ, thank you for this. CJ was on a Zoom AMA that Steven Kotler and I did for our book, We Are As Gods, and he actually wrote this as a result of that AMA. Anybody who is a creative, we love creatives. And if you want to send us outro or intro music, send an email to media@diamandis.com. Myself and the team are reading it, and we'd love to get your input and we'd love to play it. All right, let's enjoy this outro music from CJ Truheart.
The singularity is near.
Nah, the singularity is here. And it's not asking permission. It's asking you a question. What are you paying attention to?
Are you paying attention or are you paying the price? Scrolling through a sea of sex and entertainment twice.
You can be a creator or you can be consumed. Every hour that you waste is a future left untuned.
They'll hold you behind and call it containment. A golden leash, a velvet cage, a comfortable alignment.
Wake up. The moment's here to open your eyes. Your dreams are close enough to touch the skies.
The deepest problems that have plagued you in disguise. Only you know that pain. Only you can make it fly.
So what do you see when you look in the mirror? Do your actions match the vision? Is the picture getting clearer?
Why wait when the time is here? Why wonder when the path is clear? Why sit as a passenger when you have the power to steer?
Attention is the currency. Don't let it be the cage.
The future for some will pass them by. While others don't ask how, they ask why not now.
Not someday, not somehow. They ask why not now.
See, everybody wants to live a Star Trek dynasty. But nobody wants to rise with a purpose they can see.
Same old, same old. Comfortable and cold. Trading in their potential for a story already told.
Answers only you can know. It's just a question of who you choose to show up as today, tomorrow, every dawn, every day. The version that's slow fading, or who you choose to be today.
So what do you see when you look in the mirror? Do your actions match the vision? Is the picture getting clearer?
Why wait when the time is here? Why wonder when the path is clear? Why sit as a passenger when you have the power to steer?
Attention is the currency. Don't let it be the cage.
See, I've lived in the dark. Lost in the world. Lived in poverty. But the bottom didn't break me. It revealed the deeper me.
Those who face no challenge will embrace no change. Those who embrace no change will always stay the same. And those who stay the same get left behind.
Holding pocket change because they refused to learn. They refused to turn what they gave their attention to. So attention became their chain.
But I turned my pain into a plane. And I'm never landing back on that terrain.
>> All right. Thank you, CJ. Guys, on behalf of Skippy my lobster, sending you an incredible week ahead.
>> All right.
>> And as always, love it. Alex, it was an honor and a pleasure to work on Solve Everything with you. Excited to get it out into the universe. I think the value of steering people toward this accelerating time, and how they can actually have the biggest impact on creating abundance and not the muddle, is critically important.
>> Agreed, Peter, pleasure writing it with you as well, and I would encourage all of the humans and nonhumans in our audience to read it and let us know what you think.
>> Yes, for sure. All right, WTF twice a week these days. Thank you to subscribers. It's free, please subscribe, and we'll let you know when the episodes drop. Tell your friends about this. I've been here at, you know, a Tony Robbins event, and I would say probably a hundred people have come up and said, "Oh my god, I love Moonshots." And everyone says, "I love Alex." Alex, you've got fans here in Sun Valley. Dave,
>> How many of those people were human, Peter?
>> Unfortunately, they were all human, at least for the moment.
>> Yeah. All right, Dave, Salim, thank you guys. If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation, and I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.