Marc Andreessen introspects on Death of the Browser, Pi + OpenClaw, and Why "This Time Is Different"
By Latent Space
Summary
Topics Covered
- AI is an 80-year overnight success built on decades of research
- Four fundamental breakthroughs prove AI is working and will sweep through everything
- An AI agent is just LLM plus Unix shell plus file system plus cron job
- AI gives individual founders superpowers to scale like never before
- Both AI utopians and doomers are too optimistic because the real world is messy
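The agent formula in the topics above (LLM + Unix shell + file system + cron job) can be sketched as a toy loop. This is a hedged illustration, not any particular product's implementation: the `llm` function here is a hard-coded stand-in for a real model API call, and in a real deployment a cron entry would invoke the loop periodically.

```python
import subprocess

def llm(transcript: str) -> str:
    """Stand-in for a real LLM call (hypothetical; hard-coded so the
    sketch runs without any model). Returns either a shell command
    prefixed with 'RUN: ' or the sentinel 'DONE'."""
    if "agent-was-here" in transcript:
        return "DONE"
    return "RUN: echo agent-was-here"

def run_agent(max_steps: int = 10) -> str:
    """One agent session: ask the model for a shell command, run it,
    feed the output back. A cron line like '*/10 * * * * python agent.py'
    would supply the 'cron job' part of the formula."""
    transcript = ""  # stands in for the agent's file-system memory
    for _ in range(max_steps):
        action = llm(transcript)
        if action == "DONE":
            break
        cmd = action[len("RUN: "):]
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        transcript += f"$ {cmd}\n{out.stdout}"
    return transcript
```

The point of the sketch is how little scaffolding the formula needs: the shell supplies the tools, the transcript supplies the memory, and the scheduler supplies persistence.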
Full Transcript
...something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic. Having said that, I think what's actually happened is an enormous amount of technical progress that built up over time. For example, we now know that the neural network is the correct architecture. And I will tell you, there was a 60-year run, or even 70 years, where that was controversial. So the way I think about the period we're in right now: I call it an 80-year overnight success. It's an overnight success because, bam, ChatGPT hits, and then o1 hits, and then OpenClaw hits, and these are radical, overnight, transformative successes. But they're drawing on an 80-year wellspring, a backlog of ideas and thinking. It's not that it's all brand new; it's an unlock of all of these decades of very serious, hardcore research. If I were 18, this is 100% what I would be spending all of my time on. This is such an incredible conceptual breakthrough.
Before we get into today's episode, I just have a small message for listeners. Thank you. We would not be able to bring you the AI engineering, science, and entertainment content that you so clearly want if you didn't choose to click in and tune into our content. We've been approached by sponsors on an almost daily basis. But fortunately, enough of you actually subscribe to us to keep all this sustainable without ads, and we want to keep it that way. I just have one favor to ask of you. The single most powerful, completely free thing you can do is to click that subscribe button. It's the only thing I'll ever ask of you, and it means absolutely everything to me and the team that works so hard to bring Latent Space to you each and every week. If you do it, I promise you, we'll never stop working to make the show even better. Now, let's get into it.
Hey everyone, welcome to the Latent Space Podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space. Hello. And we're at a16z with, uh, Marc Andreessen.
Welcome.
Yes, yes. And what, half of 16? Something like a1? Exactly. Exactly.
Uh, apparently these are the final few days in your current office. You're moving across the road.
Yeah, we have some projects underway, but this is actually the original office. We're in where the whole thing started.
It's beautiful.
Great. Thank you.
So, I have to bring this up. You know, I wanted to pick a spicy start. In October 2022, I had just made friends with Rune, and I wanted to give him something to be spicy about. And I said, uh, it'll never not be funny that a16z was constantly going, "The future is where the smart people choose to spend their time," and then going deep into crypto and not into AI. And that was in October 2022. And Rune says there was an internal meeting at a16z to reorient around GenAI. Obviously you have, but was there a meeting? What was that?
I mean, look, I've been doing AI since the late '80s. So as far as I'm concerned, this stuff is all Johnny-come-lately.
Yeah. I mean, look, we've been doing AI our entire existence. We've been doing AI, machine learning, deep learning, way from the beginning. Obviously AI is just core to computer science. I actually view them as quite continuous. You know, Ben and I both have computer science degrees, and we're both actually old enough to remember the actual AI boom in the 1980s. There was a big AI boom at the time, with names like expert systems, and the era of Lisp and Lisp machines. I coded in Lisp. I was coding in Lisp in 1989, when that was the language of the AI future. So this is something that we're completely comfortable with, have been doing the whole time, and are very enthusiastic about.
Is there a strong "this time is different"? Because my closest analog is 2016-17: there was also an AI boom, and it petered out very, very quickly, just in terms of investment excitement. Although that's really when the Nvidia phenomenon started. I would say it was in that period, when the vocabulary was more "machine learning," that it was very clear machine learning was hitting some sort of takeoff point.
Well, as you guys have talked about at length on your show, if you really track what happened, I think the real story is that it was the AlexNet breakthrough, in like 2013, that was the real knee in the curve. And then it was obviously the transformer breakthrough in '17, and then everything that followed. But look, machine learning... I mean, one of my projects, I've been working with Facebook since 2004, on the board since 2007, and of course they started using machine learning very early, and have used it for like 20 years, for feed optimization and advertising optimization. And obviously financial services, many, many companies, many different sectors have been doing this. So it's one of these things: it's not a single thing, it's layers, right? And the layers arrive at different paces, but they build up over time. And then, yeah, in retrospect, 2017 was kind of the key point, with the transformer. And then, as you guys know, there was this really weird four-year period where the transformer existed and it was just like, let's go.
Yeah. Well, but between 2017 and 2021, that was the era in which companies like Google had internal chatbots but weren't letting anybody use them.
Yeah.
Right. And then, you know, OpenAI developed GPT-2, and then they told everybody it was way too dangerous to deploy. Right? We can't possibly let normal people use this thing.
And then, you guys surely remember AI Dungeon. There was like a year where the only way for a normal person to use GPT-3 was in AI Dungeon.
Yeah.
And so we would do this. You'd go in there and you'd pretend to play Dungeons and Dragons; in reality, you were just trying to talk to GPT. So there was this long period where, you know, big companies are cautious, and the big companies were cautious. And by the way, it took OpenAI time, they've talked about this, to actually redirect their research path.
I think it was at the Rosewood, right? The dinner that founded OpenAI was right there, right? But that dinner would have taken place in 2018? The formation of OpenAI, as late as 2018?
Sorry, no, I'm wrong. They just celebrated a 10-year anniversary. So if it's 2025...
Yeah. So 2015.
Yeah. 2015.
But then Alec Radford did GPT-1 in, what, probably '17 or '18? Yeah. And then GPT-3 was, what, 2020? And that became Copilot in '21.
Even OpenAI, which has been the leader of this thing in the last decade, even they had to adapt and lean into the new thing. So yeah, I think it's just this process of wave after wave, layer after layer, building on itself, and then you get these catalytic moments where the whole thing pops. And obviously that's what's happening now.
Is it useful to think about whether there will be another winter? Because there are always these patterns. Or is this endless summer? It's something I constantly think about: do I just get endlessly hyped and trust that I will only be early and never wrong, or will there be a winter?
So, there's something... let me say the following. There's something about AI that has led to this repeated pattern. And you guys know this: it's summer, winter, summer, winter, and it goes back 80 years. The original neural network paper was 1943, right? Which is amazing, that it goes back that far. And then, I don't know if you guys have ever talked about this on your show, but there was a big AGI conference at Dartmouth in 1955, and they got an NSF grant for all the AI experts at the time to spend the summer together. They figured if they had 10 weeks together, they could get AGI at the other end. And by the way, they got the grant, they got the 10 weeks, and then, you know, 1955: no AGI.
And like I said, I lived through the '80s version of this, where there was a big boom and a crash. So there is something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic. And it's probably on both sides of the boom-bust cycle that you see that play out. Having said that, I think what's actually happened, we now know in retrospect, is an enormous amount of technical progress that built up over time. For example, we now know that the neural network is the correct architecture. And I will tell you, there was a 60-year run, or even 70 years, where that was controversial. And we now know that that's the case. Everything we're building on today derives from the original idea in 1943. So in retrospect, we now know that these guys were right. They would get the timing wrong; they thought capabilities would arrive faster, or could be turned into businesses sooner, or whatever. But the scientists who worked on this over the course of decades were fundamentally correct about what they were doing, and the payoff from all their work is happening now. And so the way I think about the period we're in right now: I call it an 80-year overnight success. It's an overnight success because, bam, ChatGPT hits, and then o1 hits, and then OpenClaw hits, and these are radical, overnight, transformative successes. But they're drawing on an 80-year wellspring, a backlog of ideas and thinking. It's not that it's all brand new; it's an unlock of all of these decades of very serious, hardcore research and thinking. I mean, look, there were AI researchers who spent their entire lives on this. They got their PhDs, they researched for 40 years, they retired in a lot of cases, they passed away, and they never actually saw it work.
Yeah.
So sad.
It is. It is sad.
Hinton was like the last guy.
Yeah. Well, they were the guys... Allen Newell, I mean, there are tons of them. John McCarthy, you know, was one of the inventors of the field. He was one of the guys who organized the Dartmouth conference, and he taught at Stanford for 40 years and passed away, I don't know, whatever, 10 years ago or something. He never actually got to see it happen. But it is amazing in retrospect: these guys were incredibly smart, and they worked really hard, and they were correct.
So anyway, then it's like, okay, they say history doesn't repeat, but it rhymes. Does that mean there's going to be another boom-bust cycle? And I will tell you, look, in a sense, yes: everything goes through cycles, and people get overly enthusiastic and overly depressed, and there's a timelessness to that. Having said that, there's just no question. So: the four most dangerous words in investing are "this time is different." Do you know the 12 most dangerous words in investing?
No.
The 12 most dangerous words in investing are: "The four most dangerous words in investing are: this time is different." And so I'll tell you what's different: now it's working. I mean, look, there's just no question.
And by the way, I'll just give you guys my take. With LLMs, from basically the ChatGPT moment through to spring of '25, I think well-intentioned, well-informed skeptics could still say: oh, this is just pattern completion; these things don't really understand what they're doing; the hallucination rates are way too high; this is going to be great for creative writing, for Shakespearean sonnets or rap lyrics or whatever, but we're not going to be able to harness it to be relevant in coding, or in medicine, or in the fields that really matter. I think it was basically the reasoning breakthrough, it was o1 and then R1, that answered that question and said: oh no, we're going to be able to actually turn this into something that works in the real world. And then the coding breakthrough, which catalyzed over the holiday break, was the third step. If Linus Torvalds is saying that AI coding is now better than he is, that's never happened before. That's the benchmark; that's never happened before. And so now we know it's going to sweep through coding. And we know that if it works in coding, it's going to work in everything else, because coding is in many ways the hardest example, and everything else is going to be a derivative of that. And then on top of that, we just got the agent breakthrough with OpenClaw, which is fantastic, amazing, and incredibly powerful. And then we just got the auto-research, the self-improvement breakthrough; we're now into the self-improvement breakthrough. So the way I think about it is: we've had four fundamental breakthroughs in functionality: LLMs, reasoning, agents, and now RSI. And they're all actually working. So, as you can tell, I'm jumping out of my shoes. This is it. This is the culmination of 80 years' worth of work, and this is the time it's becoming real.
Yeah.
Yeah. I'm completely convinced.
I think the anxiety people feel is that during the transistor era you had Moore's law, and it was like: all right, we understand why these things are getting better; we understand the physics of it. With AI, the jumps are so jagged. Like you said, in three months you get this huge jump, and people are like, well, this can't keep happening, right? But then it keeps happening, and it'll keep happening. So how do you think about timelines for what's worth building? We always have this question with guests: should you spend time building a harness for a model, versus the next model just doing it in one shot? And how does that inform how you think about the shape of the technology? You talk about how it's a new computing platform. If you have a computing platform that drastically changes what it looks like every six months, it's hard to build companies on top of it.
Yeah. So, a couple of things. One is, look, Moore's law was what we now call a scaling law. And for your younger viewers: Moore's law was that chips either get twice as powerful or twice as cheap every 18 months. It's gotten more complicated in the last few years, but that was the 50-year trajectory of the computer industry. And by the way, that's what took the mainframe computer, a $25 million thing in current dollars, to the phone in your pocket being a million times more powerful than that, for 500 bucks. So that was a scaling law. And the key to any scaling law, including Moore's law and the AI scaling laws, is that they're not really laws, right? They're predictions. But when they work, they become self-fulfilling predictions, because they set a benchmark, and then the entire industry, all the smart people in the industry, work to make sure it actually happens. So they motivate the breakthroughs required to keep it going. And in chips, that was a 50-year run, right? And it was amazing. And it's still happening in some areas of chips.
I think the same thing is happening with the core scaling laws in AI. They're not really laws, but they are predictions, and they're motivating catalysts for the research work, and by the way also the investment dollars, required to keep the curves going. And look, it's going to be complicated and variable. There are going to be walls that look like they're fast approaching, and then engineers are going to get to work and figure out a way to punch through them; obviously that's been happening a lot. And there are going to be times when it looks like the laws have petered out, and then they're going to pick up again and surge. And what appears to have happened in AI is that there are now multiple scaling laws, multiple areas of improvement. I don't know how many more are yet to be discovered, but there are probably some we don't know about yet. For example, there's probably some scaling law around world models and robotics, around acquisition of data at scale in the real world, that we don't fully understand yet. That one will probably kick in at some point here; there's a bunch of really smart people working on it. So yeah, I think the expectation is that the scaling laws generally are going to continue, and the pace of improvement will continue to move really fast.
To your question about what to build: I'm a complete believer that the scaling laws are going to continue. I'm a complete believer that the capabilities are going to keep improving by amazing leaps and bounds. The part where I part ways a little bit with what I would describe as the AI purists (who I would characterize as in many ways the smartest people in the field, but also people who spend their entire lives in a lab and have, I would say, very little experience of the outside world) is this nuance: the outside world of 8 billion people and institutions and governments and companies and economic systems and social systems is really complicated. Eight billion people making collective decisions on planet Earth is not a simple process. You see this now: a bunch of the AI CEOs have this thing when they talk in public where they say, well, there's just this obvious set of things society needs to do, and society is not doing any of them. How can society not see XYZ, whatever their theory is? And the answer is: well, number one, there's no single society. It's eight billion people, and they all have a voice and they all have a vote, at the end of the day, in how they react to change. Human reality is just really complicated and messy. So the specific answer to your question is, as usual, it depends. Look, there's no question there are going to be companies, it's already happening, that think they're building value on top of the models and are just going to get blitzed by the next model. But there's also no question that the process of adapting any technology into the real, messy world of humanity is going to be messy and complicated. It's not going to be simple and straightforward. And there are going to be a lot of companies and products, in fact entire industries, that get built to actually help all of this technology reach real people.
The amount of capital going into these companies, though... I mean, Dario talked about it on the Dwarkesh podcast, and Dwarkesh was like, "Why don't you just buy 10x more GPUs?" And he said, because I'm going to go bankrupt if the model doesn't exactly hit the performance level.
How do you think about that as a risk? You guys are investors in OpenAI and Thinking Machines and World Labs, and it seems like we're leveraging the scaling laws at a pretty high rate. How comfortable do you feel with the downside scenario? Say things peter out: do you think you can restructure these buildouts and capital investments?
Yeah. So, I should start by saying I lived through the dot-com crash. And I can tell you stories for hours about the dot-com crash, and it was horrible. No, it was awful. It was apocalyptic. By the way, a lot of the dot-com crash was actually, at the time, a telecom crash. It was a bandwidth crash. The thing that actually crashed, that wiped out all the money, was the telecom companies.
Global Crossing.
Global Crossing, yeah.
I'm from Singapore, and they laid so much cable over our oceans.
Actually, there was a scaling law in the dot-com era too, and it was literally that the US Commerce Department put out a report in 1996 saying internet traffic was doubling every quarter. And in 1995 and 1996, internet traffic actually did double every quarter. So that became the scaling law. What all these telecom entrepreneurs did was go out and raise money to build fiber, anticipating that demand for bandwidth would keep doubling every quarter. Doubling every quarter, though, is like grains of rice on the chessboard: at some point the numbers become extremely large. And what really happened was that the internet kept growing. By the way, it has grown continuously since inception; it's never shrunk, and it's grown really fast compared to anything else in human history. But it wasn't doubling every quarter as of 1998, 1999. So there was this gap between what they thought was a scaling law and reality, and that's actually what caused the dot-com crash. Companies like Global Crossing way overbuilt fiber, and by the way also telecom equipment, all the networking gear, and then the actual physical data centers; that was the beginning of the data center build, and then the data center overbuild. I think it was something like $2 trillion that got wiped out. And the other subtlety was that the internet companies themselves never really had any debt, because tech companies generally don't run on debt, but telecom companies do; physical infrastructure companies run on debt. So companies like Global Crossing hadn't just raised a lot of equity, they had also raised a lot of debt. They were highly levered. And then you just do the math: you have a highly levered thing that's overbuilding capacity, demand is growing but not as fast as you hoped, and then boom, bankrupt. Right? And then it's like they say about the hotel industry: it's always the third owner of a hotel that makes money, right? It has to go bankrupt twice. You have to wash out all of the overoptimistic exuberance before it gets to a stable state, and then it makes money. By the way, all of those data centers and all of that fiber are all in use today, 25 years later. But the elapsed time: it took 15 years, from 2000 to 2015, to actually fill up all that capacity. So the cautionary warning is that the overbuild can happen. And you get into this thing where basically everybody who has any sort of institutional capital says, "Wow, I don't know how to invest in these crazy software things, but for sure I can build data centers, and for sure I can buy GPUs and deploy compute grids and all these things." And so if you're a pessimist, you could look at this and say, "Wow, this is really set up to replicate what we went through in 2000."
Obviously, that would be bad. The counterargument, the one I agree with, is a couple of things. One is that the companies investing the money are the bluest-chip of companies. Back in the dot-com era, Global Crossing was a new venture, an entrepreneur's company. But the money being deployed at scale now is Microsoft, Amazon, Google, Facebook, Nvidia, and now, by the way, OpenAI and Anthropic, which are at really serious size as companies, with very serious revenue. These are very large-scale companies with lots of cash and lots of debt capacity that they've never used. So this is institutional in a way it really wasn't at the time. The other is that, at least for now, every dollar being put into anything that results in a running GPU is being turned into revenue right away. And you guys know this: everybody is starved for compute capacity, and all the associated things, memory and interconnect and everything else, data center space. So every dollar being put in the ground right now is turning into revenue. And in fact, I think there's an interesting thing happening: because everybody is starved for capacity, the models we actually have, that we can use today, are inferior versions of what we would have if not for the supply constraints. Suppose a hypothetical universe in which GPUs were 10 times cheaper and 10 times more plentiful. The models would be much better, because you would just allocate a lot more money to training, and you'd build better models. So we're actually getting the sandbagged version of the technology.
Yeah. Everything we use is quantized, because the labs have to keep the full versions for themselves, right? We're not even getting the good stuff.
Yeah. But but getting the good stuff is it's just even if technical progress stops once there's like a much bigger build of like GPU manufacturing capacity and memory you know all all the things that have to happen in the course of the
next 5 or 10 years once it happens even the current technology is going to get going to get much better and then as you know like there's just like a million ways to use this stuff like there's just like a million use cases for this like it you know this isn't just sending packets across a thing whatever and
hoping that people find something to do with it. This is just like oh we apply
with it. This is just like oh we apply intelligence into every domain of human activity and then it works like incredibly well. Um, here's what I know.
incredibly well. Um, here's what I know.
In the next three or four years, somewhere between three and four years out, basically everything is selling out. The entire supply chain is sold out or selling out. And so we're just going to have chronic supply shortage for years to come. There's going to be a response from the market, and it's happening now: an enormous flood of investment in new fab capacity and everything else needed to do that.
At some point the supply chain constraints will unlock, at least to some degree, and that will be another accelerant to industry growth, because the products will get better and everything will get cheaper. So I know that's going to happen. I know the deployments, the actual use cases, are really compelling, and like I said, with reasoning and agents and so forth, I know they're just going to get much, much better from here. So I know the capabilities are really real and serious. I also know that the technical progress is not going to stop. It is accelerating; the breakthroughs are tremendous. Even just month over month, the breakthroughs are really dramatic. And so if you were a cynic, and there are cynics, you can look at 2000 and find echoes, but I can't even imagine betting that this is going to somehow disappoint, at least for years to come. I think it would be essentially suicidal to make that bet.
It was Michael Burry.
That's an interesting one. We'll pick on one guy, because he came out with it.
He doesn't mind.
It was the Nvidia short, right? He came out with the Nvidia short. And you guys have probably talked about this, but there's the analysis now that the current models are getting better faster at such a rate that if you're running an NVIDIA inference chip today that's three years old, you're making more money on it today than you did three years ago, because the pace of improvement of the software is faster than the depreciation cycle of the chip. And my understanding, from rumors I've heard, or maybe it's public, is that Google is running very old TPUs very profitably. And so it actually turns out, as far as I can tell, that the Burry thesis was 180 degrees wrong. The old Nvidia chips are getting more valuable, which is something that's literally never happened before. It's never been the case that an older-model chip becomes more valuable, not less valuable. And again, that's an expression of the ferocious pace of software progress, the ferocious pace of capability payoff that you're getting on the other side of this. So the idea of betting against that...
Yeah. An invitation to get your face ripped off.
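The old-chip claim reduces to a simple race between two exponentials: software throughput gains versus falling token prices. A toy model, with all figures purely hypothetical (not real NVIDIA or Google numbers):

```python
# Toy model of the claim: software throughput gains can outrun hardware
# depreciation, so an old inference chip earns MORE per year than when new.
# All numbers here are hypothetical illustrations.

SECONDS_PER_YEAR = 365 * 24 * 3600

def yearly_revenue(age_years,
                   base_tokens_per_sec=100.0,
                   software_gain_per_year=1.8,    # kernels, quantization, batching
                   price_per_mtok=2.0,            # dollars per million tokens
                   price_decline_per_year=0.7):   # token prices fall too
    tokens_per_sec = base_tokens_per_sec * software_gain_per_year ** age_years
    price = price_per_mtok * price_decline_per_year ** age_years
    return tokens_per_sec * SECONDS_PER_YEAR / 1e6 * price

ratio = yearly_revenue(3) / yearly_revenue(0)
# 1.8 * 0.7 = 1.26 > 1, so per-chip revenue compounds upward:
# the three-year-old chip out-earns its year-zero self.
print(f"3-year-old chip earns {ratio:.2f}x its year-zero revenue")
```

The whole argument hinges on the product of the two rates: as long as software gain times price decline exceeds 1, the installed base appreciates rather than depreciates.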
One of my early hits was modeling the lifespan of the H100s and H200s. Usually they amortize over four to seven years, and you'd maybe realistically cut that down to two to three, but actually it's going up, not down. And that's the dream: we are finding utilization, and utilization solves all problems. You can find use cases even for, like, memory, where we're having a shortage; even the shittier versions of memory that we do have, we're finding use cases for. So that's great. How important is open source AI and edge inference in a world in which you have three years of supply crunch? If you fast forward five years, how do you think about inference in the data center versus at the edge?
Well, just to start: I think open source is very important for a bunch of reasons, and I think edge inference is very important for a bunch of reasons. Just practically speaking, we're going to have fundamental supply crunches for the next, I mean, you guys know, if you just project forward demand over the next three years relative to supply, one of the dismaying predictions you can make is what's going to happen to the cost of inference in the core over the next three years. It may rise dramatically. And as you know, the big model companies are subsidizing heavily right now. So what will be the average person's per-day, per-month token cost three years from now, to do all the things they want to do? I don't know what it's going to cost. I
mean, you guys probably have friends, I have friends today, who are paying $1,000 a day in tokens to run OpenClaw, right? So, okay, $30,000 a month. And by the way, those friends have like a thousand more ideas of things they want their claw to do. So you could imagine there's latent demand of up to, I don't know, five or ten thousand dollars a day of tokens for a fully deployed personal agent. And obviously consumers can't pay that. But it gives you a sense of the future scope of demand. Even if there's a 10x improvement in price performance, that still goes to $100 a day, which is still way beyond what people can pay.
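The arithmetic in that passage, spelled out. The figures are the ones quoted in conversation, not measurements:

```python
# Back-of-envelope from the conversation: a heavy OpenClaw user's token bill,
# the guessed latent demand, and what a 10x price-performance gain buys.
daily = 1_000                              # dollars/day in tokens, as quoted
monthly = daily * 30                       # the "$30,000 a month" figure
latent_low, latent_high = 5_000, 10_000    # guessed latent demand, dollars/day
after_10x = daily / 10                     # 10x cheaper tokens -> $100/day

print(f"monthly today: ${monthly:,}")
print(f"after 10x cheaper tokens: ${after_10x:,.0f}/day")
```

Even a 10x improvement leaves the current heavy-usage pattern at $100/day, and the latent-demand ceiling at $500 to $1,000/day, which is the point being made about how far demand outruns supply.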
So there's just going to be ferocious demand. By the way, the other interesting thing about the agent thing: up until now a lot of the constraints have been GPU constraints. I think the agent thing now also translates into CPU constraints, CPU and memory.
Yes.
Right. And so the entire chip ecosystem is just going to get...
Wait for network constraints. That would be the killer.
It's all bottlenecked, potentially for years. And so, Brad and I think it's actually possible, I mean, generally inference costs are going to keep coming down, but let's put it this way: the rate of decline may level out here for a bit because of these supply constraints, and then at some point maybe the labs stop subsidizing so much, and that again will be an issue. And so there's
just going to be so much more demand for inference than can be satisfied with the centralized model. And then, you guys know this, but the dramatic innovations that have happened in Apple silicon to be able to do inference are quite amazing, and the level of effort being put in: the open source guys are putting incredible effort into this recurring pattern where the big model will never run on a PC, and then six months later it runs on a PC, right? It's amazing, and there are very smart people working on that. So there's all that, and then there are also other motivators, which is just: how much trust are the big centralized model providers building in the market, versus, at least in certain cases, with some people, for certain use cases, people being like, well, I'm not willing to just turn everything over.
So there are all the trust issues. By the way, there's also just straight-up price optimization. There are many uses of AI where you don't need Einstein in the cloud; you just need a smart local model. There are also performance issues: you're going to want your doorknob to have an AI model in it, to be able to do access control. Obviously, everything with a chip is going to have an AI model in it, and a lot of those are going to be local. And then, by the way, also wearable devices: you don't want to do a complete round trip. Whatever your smart devices are, you want them to be super low latency.
The question is, do we care who makes it? One of the biggest pieces of news this week was the collapse of AI2, the Allen Institute, one of the actual American open source model labs. And I'm not that optimistic on American open source. Like, you guys invested in Mistral, and Mistral is doing extremely well. Outside of China, that's about it.
Yeah, we'll see. We'll see. Look, number one, I do think we care who makes it. I would say this: the previous presidential administration wanted to kill it in the US. They wanted to drown it in the bathtub. So at least we have a government now that actually wants it to happen.
[crosstalk]
Yes. And so, whatever other political issues people have with this administration, which are many, this administration has, I think, a very enlightened view, and in particular an enlightened view on AI, and in particular on open source AI. And so they're very supportive. My read is that the various Chinese companies have a very specific reason to do open source, which is that they don't think they can sell commercial AI outside of China right now, or at least specifically not in the US, for a combination of reasons. And so I think they view open source AI as a bit of a loss leader against domestic paid services and ancillary products. They're very excited
about it. By the way, I think it's great that they're doing it. I think DeepSeek was like a gift to the world. The great thing about open source is that its impact is felt two ways. One is you get the software for free, but the other is you get to learn how it works, right? The paper and the code. For example, I thought this was amazing: OpenAI comes out with o1, and it's an amazing technical breakthrough, absolutely fantastic, but of course they don't explain how it works in detail, and of course they hide the reasoning traces. And everybody's like, okay, this is great, but who's going to be able to replicate this? Are other people going to be able to do this? Is there secret sauce in there? And then R1 comes out, and there's the code and there's the paper, and now the whole world knows how to do it. And then, three months later, every other AI model is adding reasoning. And so you get this double effect: even if the Chinese models themselves are not the models that get used, the education of the rest of the world that's taken place, the information diffusion, is incredibly powerful. So that
happens. And then, I don't know, we'll see. There are a bunch of American open source AI model companies. Look, there's going to be tremendous competition, there already is, among the primary model companies. Depending on how you count, there are four or five big model companies now that are kind of neck and neck in different ways. And then obviously both xAI and Meta are involved; both have huge attempts to kind of leapfrog underway. And then you've got a whole fleet of startups, new companies, including a whole bunch that we're backing, that are trying to come out with different approaches.
And then there's China. I don't know how many mainline foundation model companies there are in China at this point. It's probably six.
It's the five tigers, is what they call it.
Qwen is questionable because there's a change in leadership, right? Yeah.
But does that include... that includes Moonshot, DeepSeek, Z.ai; Qwen and 01.AI are in there, right?
And then ByteDance. ByteDance would be like the next tier. They weren't as prominent. They won't have a...
Yeah. But at least, you know, Seedance is very inspiring, and presumably they have more stuff coming, and Tencent probably has more stuff coming, and so forth. So look, here's a thing you could anticipate: between the US and China right now there are like a dozen primary foundation model companies that are at scale, at some level of critical mass. It's not going to be a dozen in three years, just because these industries don't bear a dozen. It's going to be three or four big winners, or maybe one or two big winners. And so there's going to be a whole bunch of those guys that are going to have to figure out alternate strategies, and I think open source is one of those strategies.
And so I think the question of who's going to do open source could change really fast. That's a very dynamic thing. I think it's very hard to predict what happens, and I think it's very important.
Nvidia is doing a lot.
Well, I was going to say, exactly, and then you've got Nvidia. There's an old idea in business strategy called commoditize the complement, and if you're Jensen, it's just kind of obvious: of course you want to commoditize the software. And to his enormous credit, he's putting enormous resources behind that. So maybe it's literally Nvidia, and I think that would be great.
Yeah. Narrative violation: two European projects. Damn, I'm hosting my Europe conference soon and I got both of them.
They got us. They got us.
Wait a minute. Where was Steinberger when he did OpenClaw?
Yeah.
He was in Vienna.
Oh, he was in Vienna. And then where is he now?
Uh, he's moving to...
Okay. Okay. All right. There we go. And then, yeah, the pi guy, right?
The pi guys are European.
They're buddies, in Austria.
Mario is also there. Yeah.
Right. And have they announced any sort of change yet, or have they?
No, they have a company there.
Okay. Good.
Yeah. Good. Anyway, I think pi and OpenClaw are very important software things, and I just wanted you to go off on what you think.
Yeah. I think the combination of the two of them is one of the ten most important software things.
OpenClaw got all the attention, but talk about pi.
Yeah, pi is kind of the architectural breakthrough. For those of us who are older: there was this whole thing that was very important in the world of software, basically from 1970 through to the creation of Linux, and it still is very important, called the Unix mindset. There were all these different theories and all these different operating systems, mainframes, and then Windows and Mac and all these things, but kind of behind it all was this idea of the Unix mindset. In the old days, the operating system that made the computer industry really work in the 1960s was this thing called OS/360, a big operating system that IBM developed that was supposed to basically run everything. It was a giant monolithic architecture in the sky, a giant castle of software. And by the way, it worked really well and they were very successful with it, but it was this huge castle in the sky that was almost unapproachable: you had to be inside IBM, or very close to IBM, and you had to really understand every aspect of how the system worked. And then the Unix guys, originally out of AT&T and then out of
Berkeley, came out and said, no, let's have a completely different architecture. The way the architecture is going to work is: we're going to have a prompt and a shell, all the functionality is going to be in the form of discrete modules, and you're going to be able to chain the modules together. It's almost like the operating system itself is a programming language. That led to the centrality of the shell, which led to chaining together Unix tools, which led to the emergence of scripting languages like Perl, where you could very easily do this, and then the shells got more sophisticated. And number one, that worked, and that was the world I grew up in. I was a Unix guy from, call it, 1988 all the way through my work, and it worked really well. It's in the background. Normal people didn't need to know about it, but if you were doing system architecture or application development, you knew all about it. And it's been in the background ever since.
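The "chain small modules together" idea he's describing is easiest to see in an actual pipeline. A sketch in Python, driving the same three standard Unix tools a shell pipe would (this assumes a POSIX environment with `sh`, `tr`, `sort`, and `uniq` available):

```python
import subprocess

# The Unix mindset in one line: small single-purpose tools composed by pipes.
# Equivalent shell pipeline (word frequency, most common word first):
#   tr ' ' '\n' | sort | uniq -c | sort -rn
text = "to be or not to be"
pipeline = "tr ' ' '\\n' | sort | uniq -c | sort -rn"
result = subprocess.run(["sh", "-c", pipeline], input=text,
                        capture_output=True, text=True, check=True)
print(result.stdout)   # "to" and "be" counted twice, "or" and "not" once
```

None of the four tools knows anything about word frequency; the capability emerges entirely from the composition, which is the point of the mindset.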
And look, your Mac still has a Unix shell kind of in there, and your iPhone still has a Unix shell buried in there somewhere. The Windows shell is sort of a weird derivative of that. But the internet runs on Unix, and smartphones: both iOS and Android are Unix derivatives. So Unix did end up winning. But anyway, we just started taking that for granted. And so basically the way I think about what happened with pi and then with OpenClaw is what those
guys figured out is, I always say the great breakthroughs are obvious in retrospect, right? Which is the best kind: they weren't obvious at the time, or somebody else would have done them already. So there is a real conceptual leap, but then you look at it backwards and you're just like, oh, of course. To me those are always the best breakthroughs.
Well, actually, language models themselves are like that. It's just like, oh, next-token completion. Oh, of course.
Yeah. What other objective mattered?
Yeah, exactly. But right, as you're saying, it wasn't obvious until somebody actually did it, right?
And so the conceptual breakthrough is real and deep and powerful and very important. The way I think about pi and OpenClaw is that it's basically marrying the language model mindset to the Unix shell-prompt mindset. So, what is an agent, right? As you know, many smart people have been trying to figure out what an agent is for decades, and they've had many architectures to build agents and the whole thing. And it turns out, what is an agent? What we now know is that an agent is the following. It's a language model. And then above that it's a bash shell, a Unix shell, and the agent has access to the shell, hopefully in a sandbox, maybe in a sandbox. So it's the model, it's the shell, and then it's a file system, and the state is stored in files, and there's the markdown format for the files themselves. And then there's basically what in Unix is called a cron job: there's a loop, there's a heartbeat, and the thing basically wakes up.
So it's basically LLM plus shell plus file system plus markdown plus cron, and it turns out that's an agent. And every part of that, other than the model, is something we already completely know and understand. And in fact, it turns out the latent power of the Unix shell is extraordinary. There are enormous numbers of Unix commands, and enormous numbers of command-line interfaces into all kinds of things already. To start with, your entire computer runs on a shell, if you're running a Mac or a phone, so the full power of your computer is available at the command-line level. And then it turns out it's really easy to expose other functions as a command-line interface. So this whole idea that we need MCP and these fancy protocols: no, we don't. We just need a command-line thing. So that's the architecture. And then it turns out, what is your agent? Your agent is a bunch of files stored in a file system. And then
there's the thing that just completely blew my mind when I wrapped my head around it as a result of this, which is: this means your agent is now actually independent of the model it's running on. You can swap out a different LLM underneath your agent, and your agent will change personality somewhat because the model is different, but all of the state stored in the files will be retained.
It's a different instruction set, but you just recompiled it, right?
Exactly. It's like swapping out a chip and recompiling, but it's still your agent, with all of its memories and all of its capabilities. And then by the way, you
can also swap out the shell, so you can move it to a different execution environment that is also a bash shell. By the way, you can also switch out the file system, and you can swap out the heartbeat, the cron framework, the loop, the agent framework itself. So your agent, at the end of the day, is just its files. And as a consequence of that, a couple of important things turn out to be true about the agent. One is it can migrate itself, right? You can instruct your agent: migrate yourself to a different runtime environment, migrate yourself to a different file system, swap out the language model. Your agent will do all that stuff for you. And then there's the final thing, which is just amazing, which is that the agent actually has full introspection. It knows about its own files and it can rewrite its own files, right? Which, by the way: basically no
widely deployed software system in history has had the property that the thing you're using actually has full introspective knowledge of how it itself works and is able to modify itself. There have been toy systems with that, but there's never been a widely deployed system with that capability. And that leads to the capability that just completely blew my mind when I wrapped my head around it, which is: you can tell the agent to add new functions and features to itself, and it can do that. Extend yourself, right? Extend yourself, give yourself a new capability. So literally, you run into somebody at a party and they're like, oh, I have my OpenClaw connect to my Eight Sleep bed and it gives me better sleep advice. And you go home at night, or right there at the party, and you tell your claw, add this capability to yourself, and your claw will say, okay, no problem. It'll go out on the internet and figure out whatever it needs, go to cloud code or whatever, write whatever it needs, and the next thing you know, it has this new capability. So you can have it upgrade itself without having to do anything other than tell it that you want it to do that. And
so anyway, the combination of all this is just massive. It's just incredible. If I were 18, this is 100 percent what I would be spending all of my time on. This is such an incredible conceptual breakthrough. And people are going to look at it, and they already have this response, people are going to look at it and say, where's the breakthrough? Because all of these components were already known before. But this is the key: the breakthrough was that by using all these components that were known before, you get all of the underlying capability that's buried in there. And so, for example,
computer use all of a sudden falls out trivially. Of course it's going to be able to use your computer: it has full access to the shell, right? And then you give it access to a browser, and you've got the computer and the browser, and off and away it goes, with all the abilities of the browser as well. And so the capability unlock here is profound. My
friends who are deepest into this are having their claw do literally like a thousand things in their lives. They have new ideas every day; they're constantly throwing new challenges at the thing. And by the way, it's early. These are prototypes, and as you guys know, there are security issues and a bunch of stuff to be ironed out, but the unlock of capability is just incredible. And I have absolutely no doubt that everybody in the world is going to have at least an agent like this, if not an entire family of agents, and we're going to be living in a world where I think it's almost inevitable now that this is the way people are going to use computers.
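The portability and self-migration claims in that stretch reduce to one idea: the agent's identity is its files, while the model and runtime are pluggable parts underneath. A sketch, with all names and structure hypothetical:

```python
import shutil, tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

# Hypothetical sketch of the portability claim: the agent IS its files;
# the model backend is a plug you can swap without touching those files.

@dataclass
class Agent:
    home: Path                       # memory.md, tools/, config: the agent itself
    model: Callable[[str], str]      # swappable LLM backend

    def swap_model(self, new_model: Callable[[str], str]) -> "Agent":
        # New "brain", same memories: the files don't move or change.
        return Agent(home=self.home, model=new_model)

    def migrate(self, new_home: Path) -> "Agent":
        # Migration is just copying the files to a new runtime environment.
        shutil.copytree(self.home, new_home, dirs_exist_ok=True)
        return Agent(home=new_home, model=self.model)

home = Path(tempfile.mkdtemp()) / "agent"
home.mkdir()
(home / "memory.md").write_text("# things I know\n")

a = Agent(home, model=lambda p: "answer from model A")
b = a.swap_model(lambda p: "answer from model B")
# Different model, identical state:
assert (a.home / "memory.md").read_text() == (b.home / "memory.md").read_text()
```

The design choice worth noticing is that there is no serialization step: because state was never anywhere but the file system, "migrate" and "swap the model" are both trivial operations rather than engineering projects.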
I was going to say, for someone who is deeply familiar with social networks, the next step is your claw talking to my claw: posting on claw Facebook, posting their jobs on claw LinkedIn, posting their tweets on claw X or whatever. I do think that that is how we get into some danger, in terms of alignment and whether or not we want these things to run. You guys know RentAHuman.com?
Yeah.
I mean, it's Fiverr, it's TaskRabbit, it's Mechanical Turk, but flipped, right? The agent hiring the people. Which of course is going to happen. It's obviously going to. I'm curious if you have any thoughts on the engineering
side. When you built the browser and the internet, it was just a bunch of mostly plain-text files plus some images, and today every website and app is so complex, and somehow the browser kept evolving to fit that in. Are there any design choices that were made early in the browser, the internet, and the protocols where you're seeing something similar with agents today: hey, this thing is just not going to work for this type of new compute, and we should just rip it out right now?
There were a whole bunch, but I'll give you a couple. So one is, and to be clear, this was totally different: we didn't have the capabilities we have today, we didn't have the language models underneath this, but we did have this idea that human readability actually mattered a great deal. In those days it was not so much about English language; there was a design decision to be made between binary protocols and text protocols. And basically every old-school systems architect who had grown up between the 1960s and the 1990s said: what do you know about the internet? It's starved for bandwidth. You have these very narrow straws. Look, when we did the work on Mosaic, people who had the internet at home had a 14.4 kilobit modem, right? So you're trying to hyper-optimize every bit of data that travels over the network. And
so obviously if you're going to design a protocol like HTTP, you're going to want it to be binary, you know, highly compressed binary protocol for maximum efficiency. And you're going to want to
efficiency. And you're going to want to have it be like a single connection that persists and you're the last thing you're going to want to do is like bring up and tear down new connections. And
you definitely you're not not going to want a text protocol. And so of course we said no, we actually want to go completely the other direction. It's
obviously we only want text protocols. U
by the way, same thing in HTML itself.
We want HTML to be relatively verbose.
you know, we want the tags to actually be like human readable. Um, we want to use the most inefficient things possible.
Yeah. We want to do the do the do the inefficient things.
You're the original token maxer.
Yeah. Exactly. Yeah. Yeah. Yeah.
Well, actually, this was the conscious thing, which basically says: assume a future of infinite bandwidth, and build for that. And basically it was a bet that if the latent capabilities of the system were powerful enough, and that was obvious enough to people, it would create the demand for bandwidth that would cause the supply of bandwidth to get built, which would actually make the whole thing work. And then, specifically, we wanted everything to be human readable because, at the engineering level, we wanted people to be able to read the protocol coming over the wire and understand it with their bare eyes, without having to disassemble it or convert it out of binary, right? And so HTTP and everything else were always text protocols. And the same thing with HTML. In many ways, some people say the key breakthrough in the browser was the View Source option: every web page you go to, you can view source, which means you can see how it worked, which means you can teach yourself how to build new web pages. So there was that: human readability. And again, human readability in those days still meant technical specs; now it means English language. But there's an incredible latent power in giving everybody who uses the system the option to drop down and actually understand how it's working. That worked really well for the web, and I think it's working really well for AI. That was one. What was the other?
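The human-readability bet is easy to verify for yourself, because HTTP is still plain text on a socket. A minimal, self-contained sketch; it spins up a throwaway local server so no network is needed, and the handler and page contents are invented for the demo:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal local server so the demo is self-contained (no network needed).
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side is nothing but readable text written to a socket.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: 127.0.0.1:{server.server_port}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.create_connection(("127.0.0.1", server.server_port)) as s:
    s.sendall(request.encode("ascii"))
    raw = b""
    while chunk := s.recv(4096):  # read until the server closes
        raw += chunk
server.shutdown()

print(raw.decode("ascii", errors="replace"))
```

The entire exchange, request and response alike, is ASCII you can read with your bare eyes over the wire, which is exactly the property being described.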
A big part of the idea of web servers was to surface the underlying latent capability of the operating system, and also the underlying latent capability of the database. Because what is a web server, fundamentally, architecturally? It's running on top of an OS, so it's the OS's ability to manage the file system and process everything else you want to do. And then, of course, a lot of early websites were front ends to databases. So you wanted to unleash the underlying latent power of whatever it was: an Oracle database, or Postgres, or whatever. A lot of the function of the web server was just to bridge from that incoming internet connection to the underlying power of the OS and the database. And again, people looked at it at the time and said: does this really matter? Is this important? Because we've had databases forever, and we've always had user interfaces for databases, and this is just another user interface for a database. Okay, fair enough. But on the other side of that, this is now a much better interface to databases, one that eight billion people are going to use, and one that's far easier to use and far more flexible. And it's not just old databases. Now you have a system where people can actually understand why they want to build a million times more database apps than they had in the past. And then the number of databases in the world exploded. And so again, this goes to this thing of building in layers. Some of the smartest people in the industry look at any new challenge and say: okay, I need to build a new kind of application.
So the first thing I need to do is build a new programming language, right? And then the next thing I need to do is build a new operating system, right? And then the next thing I need to do is build a new chip, right? They kind of want to reinvent everything. And I've always had, maybe, a pragmatic mentality, or an engineering-over-science mentality, but it's more like: no, you have all of this latent power in the existing systems. You don't want to be held back by their constraints, but what you want to do is liberate that power and open it up.

Yeah.

And I think the web did that for those reasons, and I think the same thing is happening now with AI.
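The "web server as a thin bridge to the database's latent power" idea can be sketched in a few lines. This is a toy illustration: an in-memory SQLite table stands in for the Oracle or Postgres behind an early website, and a `handle` function stands in for the server's request dispatch:

```python
import sqlite3

# In-memory database standing in for the real database behind the site.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
db.executemany(
    "INSERT INTO products (name, price) VALUES (?, ?)",
    [("widget", 9.99), ("gadget", 19.99)],
)

def handle(path: str) -> str:
    """Map a URL path straight onto a database query and render plain HTML.

    The 'server' adds almost nothing of its own; it just unlocks what the
    database underneath already knows how to do.
    """
    if path == "/products":
        rows = db.execute("SELECT name, price FROM products ORDER BY id").fetchall()
        items = "".join(f"<li>{name}: ${price:.2f}</li>" for name, price in rows)
        return f"<html><body><ul>{items}</ul></body></html>"
    return "<html><body>404</body></html>"

print(handle("/products"))
```

Everything interesting here lives in the database; the web layer is just the bridge from the incoming connection to that latent capability.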
It's a great perspective on the web. Programming languages are another good example. We had Bret Taylor on the podcast and we were talking about Rust, and Rust is memory safe by default. So why are we teaching the model not to write memory-unsafe code? Just use Rust and you get it for free. How much time do you think should be spent recreating some of these things, versus taking them for granted? Or saying, okay, Python is kind of slow, and TypeScript, you know: as imperfect as they are, they are the lingua franca.
I mean, I think this is going to change a lot, because I don't think the models care what language they program in. And I think they're going to be good at programming in every language. And I think they're going to be good at translating from any language to any other language. Like, okay, so this gets into the coding side of things. I think we're going through a really fundamental change. And look, I grew up hand-coding. Everything I did was written in C. Back in the day I wasn't even using C++ or Java or any of this stuff, right? So everything I ever did, I was managing my own memory at the level of C. And I'm still from the generation that knew assembly language, so I could drop down and do things right on the chip. And so all of us have always lived in a world in which software is this precious thing that you have to think about very carefully. It's really hard to generate good software, there's only a small number of people who can do it, and you have to be very jealous in thinking about how you allocate: what are your engineers working on, how many good engineers do you actually have, how much software can they write, how much software can human beings maintain. And I think all those assumptions are being shot right out the window right now. I think those days are just over, and in the new world, high-quality software is just infinitely available. If you need new software to do X, Y, Z, you're just going to wave your hand and you're going to get it. And then if you don't like the language it's written in, you just tell the thing: all right, now I want the Rust version. Or, you know, security: by the way, computer security is about to go through the most dramatic change ever. Number one, every single latent security bug is about to be exposed, right? So we're set up for the computer security apocalypse for a while. But on the other side of it, now we have coding agents that can go in and actually fix all the security bugs. So how are you going to secure software in the future? You're going to tell the bot to secure it, and it's going to go through and fix it all. And so this thing that was an incredibly scarce resource, high-quality software, is just going to become a completely fungible thing that you're going to have as much of as you want, right? And that has tons and tons of consequences.

In some sense, the answer to the question you posed is, I don't know, somewhat simple or straightforward, which is: if you want all your software in Rust, you just tell the bot you want all your software in Rust. Things that used to be hard, or even seemed like an insurmountable mountain to get through, all of a sudden become very easy.
I think Bret had a theory that there would be a more optimal language for LLMs, and so the contention is: there isn't, just don't bother, just use whatever humans already use; LLMs are perfectly capable of porting.

I think we're pretty close, and I don't know if this would work today, but I think we're pretty close to being able to ask the AI what its optimal language would be, and let it design it.

True. Okay, here's a question. Are you even going to have programming languages in the future? Or are they just going to be emitting binaries? Let's assume for a moment that humans aren't coding anymore. Let's assume it's all bots. What levels of intermediate abstraction do the bots even need?

Yeah.

Or are they just coding binary directly?

Did you see there's actually an experiment there? Somebody just did this thing where they have a language model now that actually emits model weights for a new language model, right?

And so will the bots predict the weights?

Yeah. Will the bots literally be emitting not just binaries, but weights for new models directly? And conceptually, there's no reason why they can't do both of those things. Architecturally, both of those things seem completely possible.

Very inefficient, though. You're basically running an inefficient simulation of a simulation inside of weights.

Yeah, very inefficient. But LLMs are already incredibly inefficient. My favorite example: ask one what 2 plus 2 equals, right? It's, whatever, billions and billions of times more inefficient than using your pocket calculator.

But the payoff of the general capability is so great. And so anyway, I kind of think in 10 years, I'm not sure there will even be a salient concept of a programming language in the way we understand it today. In fact, what we may be doing more and more is a form of interpretability: trying to understand why the bots have decided to structure code the way they have.
I mean, if you play it through, you don't need browsers then. That's the death of the browser.

Well, so I would take it a step further, which is: you may not need user interfaces. So, who is going to use software in the future?

Other bots.

The other bots. Yeah.

Yeah. And so...

You still need to, I don't know, pipe information in and out.

Really?

Well, what are you going to do then?

Are you sure?

You're just going to log off and touch grass?

Whatever you want. Exactly.

Isn't that better?

I want software to do stuff for me.

But isn't that better? I mean, look, you know all the arguments here. It was not that long ago that 99% of humanity was behind a plow.

Right.

Right. And what are people going to do if they're not plowing fields all day to grow food, right? It just turns out there are much better ways for people to spend time than plowing fields.

Yeah. Drawing.

Exactly. Talking to their friends. And look, I'm not an absolutist and I'm not a utopian. To be clear, I have an 11-year-old and he's learning how to code, and I think it's still a really good idea to learn how to code. But if you project forward, you just have to think forward to a world where it's: okay, I'm just going to tell the thing what I need, and it's going to do it, and it's going to do it in whatever way is most optimal. Unless I tell it to do it non-optimally; if I tell it to do it in Java or in Rust or whatever, it'll do it, I'm sure. But if I just tell it what to do, it's going to do it in whatever way is optimal. And then if I need to understand how it works, I'm going to ask it to explain to me how it works, right? It's going to be the engine of interpretability, explaining itself. And I'm just not convinced that in that world you have these historical abstractions; the abstractions will be whatever the bots need, right?
Yeah.

Well, I'm curious: if that's true, then shouldn't the model providers be building some internal language representation that they can do extreme RL and reward modeling around? Because today they're kind of tied to TypeScript and Python, because the users need to write in those languages. Whereas they could have their own thing internally, and they don't need to teach it to anybody; they just need to teach their model. And I think that's maybe how you get divergence between the models. Going back to the Pi plus OpenClaw thing: it's, oh, I built all the software using the OpenAI model, and now I switch to the Anthropic model, but the Anthropic model doesn't understand the thing. So it feels like there still needs to be some abstraction.

But maybe not. Maybe that's the lock-in that the model providers want to have.

I'm not even sure that's lock-in, though, because why can't the second model just learn what the first model has done?

Exactly.
Okay, so give an example. As you know, models can now reverse engineer software, right? Isn't there a whole thing now where people are reverse engineering Nintendo game binaries?

Yeah. I've seen a bunch of reports like this, where somebody has a favorite game from the 1980s and the source code is long dead, but they have a binary burned into a chip or something, and now they've reverse engineered it to get a version that runs on their Mac.

Right. And so this is why I kind of say: if you can reverse x86 binaries, then why can't you reverse engineer whatever the...

Yeah. And because it runs on a Unix-based system, it has to be reversible, because it needs to run on the target.

Yeah. Basically. And so I just think it's this thing where, by the way, everything we're describing is something human beings in theory could have done before, but it was always cost- and labor-prohibitive. I learned how to reverse engineer; human beings can reverse engineer binaries. It's just that for any complex binary I'd need like a thousand years to do it. But now, with the model, you don't. And so all of a sudden you get these things. Or another way to think about it: so much of human-built systems compensates for human limitations.

Yep.

Right. And if you don't have the human limitations anymore, then it's not that you won't have abstractions, but you'll have a different kind of abstraction.

Yeah.
I have two topics to bring us to a close, and you can pick whichever one. So, talking about protocols: was it you or someone else, I forget my internet history, who said that the biggest mistake, the thing we didn't figure out in the early days, was payments?

Yes.

Is that you?

Yes. 402: Payment Required.

We have a chance now.

I don't think we're going to figure it out. I don't know. What's your take?

Oh, I think we will. Yeah. No, now I think it's going to happen for sure. And there are two reasons it's going to happen for sure. One is that we actually have internet-native money now, in the form of stablecoins and crypto, and I think this is the grand unification of AI and crypto that's about to happen. I think AI is the crypto killer app; I think that's where this is really going to come out. And the other is that I think it's now obvious that AI agents are going to need money, and it's already happening, right? If you've got a claw and you want it to buy things for you, you have to give it money in some form.

I would say the adoption is probably like 0.1%, if that. But yeah.

Oh, today, yeah. But think forward: where is it going? The ultimate principle of everything we do is the William Gibson quote: the future is already here, it's just not evenly distributed yet.
My friends who are the most aggressive users of OpenClaw have just given their claws bank accounts and credit cards. And not only have they done it, it's obvious that they needed to do it, because it's obvious that the claws needed to be able to spend money on their behalf.

Yeah. It's just completely obvious. And again, the number of people who have done that today is, to your point, probably 5,000 or something. But it'll grow. That's how these things start, actually.

I mean, since you keep mentioning it: by the way, OpenClaw, if you don't give it a bank account, it's just going to break into your bank account anyway and take your money, so you might as well do it. And by the way, I've got to tell you, I really love the phenomenon. I love the YOLO. I'm not doing it myself, to be clear, but I love the people that are just like, what is it, dangerously skip permissions?

Which, by the way, is a Facebook thing.

Okay.

Right. Because at Facebook they have this culture of naming the thing "dangerous" so that you're aware, when you enable the flag, that you're opting into a dangerous thing.

Okay.

And they brought it into OpenAI.

But of course that makes it enticing.

Sam runs Codex with skip permissions on his laptop.

Yes, 100%. And so I think the way to actually see the future is to find the people who are doing that.

There's a command, you know: log everything, just watch it.

Watch the logs.

But let's actually find out what the thing can do. And the way to find out what the thing can do is just try everything.

Yeah. Let it try everything. Let it unlock everything. By the way, that's how you're going to find all the good stuff it can do. That's also how you're going to find all the flaws.

I think the people who turn that on for bots are martyrs to the progress of human civilization. I feel very bad for their descendants, whose bank accounts are going to get looted by their bots in the first 20 minutes. But I think the contribution they're making to the future of our species is amazing. It's gentleman science, you know.

Yes. It's Ben Franklin out trying to get lightning to strike his kite, seeing if he gets electrocuted.

Yeah. It's Jonas Salk with the polio vaccine, right? Injecting himself. Yes. So, yes, I think we should have flags, we should have monuments, for the people that just let OpenClaw run their lives.
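The 402 idea is already sitting in HTTP: 402 Payment Required has been a reserved status code since HTTP/1.1. A hedged sketch of how an agent-facing paywall could use it; the `X-Payment` header, the receipt token, and the price-quote format here are all invented for illustration, not any real payment protocol:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICE = {"amount": "0.01", "currency": "USDC"}  # hypothetical price quote
VALID_TOKEN = "paid-token-123"                  # stand-in for a payment receipt

class Paywalled(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("X-Payment") == VALID_TOKEN:
            body = b"the paid-for data"
            self.send_response(200)
        else:
            body = json.dumps(PRICE).encode()   # tell the agent what it costs
            self.send_response(402)             # 402 Payment Required
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Paywalled)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# First request: no payment attached, so we get 402 plus a machine-readable price.
try:
    urllib.request.urlopen(url)
    first_status = 200
except urllib.error.HTTPError as e:
    first_status = e.code
    quote = json.loads(e.read())

# The agent "pays" (out of band here) and retries with a receipt token.
req = urllib.request.Request(url, headers={"X-Payment": VALID_TOKEN})
with urllib.request.urlopen(req) as resp:
    second_status, data = resp.status, resp.read()
server.shutdown()

print(first_status, quote["currency"], second_status, data.decode())
```

The interesting part is that the whole negotiation is machine-readable, which is what an agent with a wallet needs.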
More anecdotes: what are the craziest or most interesting things that people listening to this should go home and do?

I mean, the extreme thing is just the straight YOLO: just turn it loose.

That's a general capability. Is there a specific story that was like, wow, and everyone in the group chat just lit up?

I mean, there's already tons of health stuff; the health dashboard stuff is just absolutely amazing. There are a number of stories I don't want to share because obviously they're personal. But one of the things OpenClaw is really good at is hacking into all the stuff on your LAN.

It's really good. So, you know, Internet of Things, aka internet of... Like, super insecure, but great.

It's discoverable.

Yeah, it's discoverable. OpenClaw is happy to scan your network and identify all the things. And then my friends who are most aggressive at this are having OpenClaw take over everything in their house.

Yeah. It takes over their security cameras. It takes over their access control systems. It takes over their webcams. I have a friend whose claw watches him sleep. Put a webcam in your bedroom, put the claw on a loop, have it wake up frequently, and just tell it: watch me sleep. And I've seen the transcripts, and it's literally like: Joe's asleep. This is good. It's good that Joe's asleep because I have his health data and I know he hasn't been getting enough sleep, so it's really good that he's sleeping. I really hope he gets his full, whatever, five hours of REM sleep. Uh oh, Joe's moving.

Joe's moving.

Joe might be waking up. If Joe wakes up now, he's going to ruin his sleep cycle. Oh, okay. It's okay. Joe just rolled over. Okay, he's gone back to bed. Okay, good. All right. Okay, I can relax. This is fine.

He's monitoring the situation.

Monitoring the situation. And being a bot, it's just very focused, right? Its reason for existence is to watch Joe sleep. And I was talking to my friend who did this. On the one hand it's like, all right, this is weird and creepy, and maybe this is taking over my life. On the other hand it's like, you know what, if I had a heart attack in the middle of the night, this thing literally would freak out and call 911. There's no question this thing would figure out how to alert medical authorities, probably summon SWAT teams, and do whatever would be required to save my life, right? And so, yeah, that's happening. What else?
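The "put the claw on a loop" pattern is just an agent as a cron-style loop: wake up, observe, ask the model, log, escalate if needed. A hedged sketch of the loop shape only; `capture_frame` and `ask_model` are hypothetical stubs standing in for a real webcam grab and a real model call:

```python
import time
from datetime import datetime

# Hypothetical stand-ins: a real setup would grab an actual webcam frame
# and call an actual model API. They are stubbed so the loop is runnable.
def capture_frame() -> bytes:
    return b"<jpeg bytes>"

def ask_model(prompt: str, image: bytes) -> str:
    return "Joe is asleep. This is fine."

def watch(iterations: int, interval_s: float = 0.0) -> list[str]:
    """The whole 'agent' here is a cron-style loop: wake, look, reason, log."""
    log = []
    for _ in range(iterations):
        frame = capture_frame()
        verdict = ask_model("Watch Joe sleep. Is anything wrong?", frame)
        log.append(f"{datetime.now().isoformat(timespec='seconds')} {verdict}")
        if "call 911" in verdict.lower():  # escalation hook for emergencies
            break
        time.sleep(interval_s)
    return log

entries = watch(iterations=3)
print("\n".join(entries))
```

In a real deployment the interval would be minutes, not zero, and the escalation branch would actually notify someone; the point is only that the scaffolding is tiny.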
Give, um, there's a company, Unitree, that makes the robot dogs, and I actually have one at home, which is really fun. The Chinese companies are so aggressive at adopting new technology, but they don't always, let's say, take the time to really package it and think it all the way through. And so the Unitree dog I have has an old, non-LLM control system, which, by the way, is not very good. It markets well, but in practice it's not that good; it has trouble with stairs and so forth, so it's not quite what it should be. But then the language model thing comes out, so they add LLM capability, and then they add a voice mode to it. But that LLM capability is not at all connected to the control system. So you've got this schizophrenic dog that is a complete idiot when it comes to climbing the stairs, but it will happily teach you quantum mechanics, right, in a plummy English accent. Absolutely amazing intelligence.

Yeah. Now, obviously what's going to happen in the future is they're going to connect those together, but right now they're not, and so right now it's not that useful. And so I have a friend who has one of these, who had his claw basically hack in and rewrite the code: write new firmware for the Unitree robot. And now it's an actual pet dog for his kids.

Could you do that before, like, with the motion?

Yeah, he said it's completely different. He said it's a complete transformation, and whenever there's an issue in the thing now, the claw just rewrites the code. It goes in and does the code. And this kind of goes to your point: this is why we want to think about AI coding. AI coding is not just writing new apps. It's also going in and rewriting all the old stuff that should have worked but never worked. And so I think the bad old Internet of Things era is basically over. There's the potential here where all these devices in your house that have been basically marginal, basically dumb, all of a sudden might all get really smart. Now you have a smart home. Yes, there are horror movies of which this is the premise, so you have to decide if you want this. But this is the first time I can say with confidence: I now know how you could actually have a smart home with 30 different kinds of things with chips and internet access where it actually all makes sense, all works together, and it's all coherent, and have that unlock without a human being having to go do any of that work.

Yeah. I'm waiting for: sorry Marc, I can't let you open that fridge door.

Exactly. Exactly. YES. YES. Because you're not supposed to eat right now. I have every thread of your health information, and I know you think you can do this, but are you really sure? You told me last night you really don't want me to let you do this. So, I'm sorry, but the fridge door is locked.

Open the fridge doors!

Exactly. And by the way, I know you're supposed to be studying for a test, so why don't you go study? When you can pass the test, I will open the fridge door for you.

Yeah.
Final protocol, and then we can wrap up: proof of human.

Yes. Right. That's the last piece that we've got to figure out.

Yeah. So I would say there are two massive asymmetries in the world right now, where we've known these asymmetries exist and we, society, have been unwilling to grapple with them, and I think they're both tipping right now. And they're the same thing: there's a virtual world version and a physical world version. The virtual world version is the bot problem. The internet is just awash in bots. The internet is awash in fake people. It has been forever. By the way, a lot of that has to do with the lack of money, you know. And so, this was my spicy take: these two are the same thing, and corporations are people too, you know.

Interesting. Yeah. Okay. So a bank account is proof of human.

Yeah. Okay. Until you give the bots bank accounts.

Yeah, exactly. So, okay, there's that. But look, every social media user knows this: the bot problem is a big problem. The bot problem has been a big problem forever. It's a huge problem, and it's never really been confronted directly at any point. By the way, the physical world version of this is the drone problem. We've known for 20 years now that the asymmetric threat, both in actual military conflict and in security on the home front, is the cheap attack drone, right? The cheap suicide drone with a bomb.
And we've known that forever. And by the way, it's very disconcerting how every office complex in the world is unprotected from drone attacks. Every stadium, every school, every prison. Okay, we've known that, and we've never done anything about it.

Yeah.

One possibility is to just leave them unprotected forever and live in a world of asymmetric terrorism forever. The other is to take the problem seriously and figure out the set of techniques and technologies required to deal with it, whether those are lasers or jammers or early-warning systems or, you know, personal force fields.

Kinetic personal... Dune personal force fields.

Exactly. And in both cases, these are economic asymmetries, right? It's really cheap to field a bot, but it's very hard to tell something is a bot. It's very cheap to field a drone, but it's very expensive to defend against a drone. But you see what I'm saying: it's the virtual version of the problem and the physical version of the problem. For the virtual version, what we need, quite literally, is proof of human. The reason is that you're not going to have proof of bot. Especially now, the bots are too good. The bots can pass the Turing test. And if the bots can pass the Turing test, then you can't screen for bot; you can't have proof of not-a-bot. But what you can have is proof of human. You can have cryptographic validation that this is definitely a person, and then cryptographic validation that this is definitely something a person said. This video is real, right?
Just to double-click on that: do you think Alex Blania with World has got it, or is there an alternative?
Oh, I think many people will try. We're one of the key participants in the World project, so we're partisans, but yeah, I think so. We think World is exactly correct, and the reason is that it has to be proof of human. Because you can't do proof of not-bot, you have to do proof of human. And to do proof of human, you need biological validation. You need to start with: this was actually a person, right? Because otherwise you have bots signing up as fake people, right? So you have to have a biometric, then you have to have cryptographic validation, and then the ability to do the lookup. And then, by the way, the other thing you need is selective disclosure. You need to be able to do proof of human without revealing all the underlying information. By the way, another thing you're going to need is proof of age, right? Because there are all these laws in all these different countries now where you need to be 13 or 16 or 18 or whatever to do different things, and so you're going to need a validated proof of age to be able to legally operate, right? So that's coming. And then you're going to want proof of credit score, and proof of a hundred other things. That's a tricky one. It is a tricky one, but, to give you an example, if somebody's checking on your credit, they shouldn't need to know your name in order to be able to find out whether you're creditworthy, right?
I see: independently verifiable pieces of information, selectively disclosed.
And this is the answer to the privacy problem at large, which is: I only need to prove what I need to prove at that moment. So you're going to need that, and I think their architecture makes sense. So that needs to get solved. I think language models have tipped it; the bots are now too good, and so they're undetectable. And so as a consequence, we now need to go confront that problem directly. And then, like I said, the other problem is we need to go actually confront the drone problem.
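[Editor's note: the proof-of-human stack described above, a biometric enrollment plus cryptographic validation plus selective disclosure, can be sketched in miniature. This is a hedged toy, not World's actual protocol: the HMAC "signature", the attribute names, and the salted-hash commitments are all illustrative stand-ins (a production system would use asymmetric signatures and zero-knowledge proofs), but it shows the shape of proving one claim, like proof of age, without revealing the rest.]

```python
import hashlib
import hmac
import json
import os

# Stand-in for the issuer's signing key; a real system would use an
# asymmetric signature (or a zero-knowledge proof), not a shared secret.
ISSUER_KEY = b"demo-issuer-secret"

def commit(value: str, salt: bytes) -> str:
    """Salted hash commitment to a single attribute value."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

def issue_credential(attrs: dict) -> dict:
    """Issuer: commit to every attribute, then sign the commitment set."""
    salts = {k: os.urandom(16) for k in attrs}
    commitments = {k: commit(v, salts[k]) for k, v in attrs.items()}
    payload = json.dumps(commitments, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature, "salts": salts}

def disclose(cred: dict, key: str, value: str) -> dict:
    """Holder: open exactly one commitment; the other attributes stay hidden."""
    return {"key": key, "value": value, "salt": cred["salts"][key],
            "commitments": cred["commitments"], "signature": cred["signature"]}

def verify(proof: dict) -> bool:
    """Verifier: check the issuer's signature, then the opened commitment."""
    payload = json.dumps(proof["commitments"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, proof["signature"]):
        return False
    return commit(proof["value"], proof["salt"]) == proof["commitments"][proof["key"]]

# Enrollment binds the credential to a person; disclosure reveals one claim.
cred = issue_credential({"is_human": "true", "over_18": "true", "name": "Alice"})
proof = disclose(cred, "over_18", "true")
print(verify(proof))  # True -- the verifier learns over_18, never the name
```

The same structure covers the credit-score case mentioned above: the verifier can check a single attested claim without ever learning the holder's name.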
The Ukraine conflict has really unlocked a lot of thinking on that. And now the Iran situation is also unlocking that. And so I think there's going to be just this incredible explosion of both drones and counter-drones.
Our drones are better than their drones, as long as you keep it that way.
Yeah.
Yeah. And counter drones.
I think we can sneak in one more question. I'm trying to tie together a lot of things that you've said over the year. So at the Milken Institute debate with Thiel, which is amazing, you talked about the lag between a new technology and kind of its GDP impact. The other idea you talked about is bourgeois capitalism, and how this kind of managerial class was needed because of complexity. And I think if you bring AI into the fold, you get much higher leverage per person. So if you have, you know, the Musk industries, and you give Elon AGI, you can run a lot more things at once.
That's right.
And then you have the social contract, and I know you retweeted a clip of Sam Altman saying we're rethinking the whole thing, and you're like, absolutely not.
Yes.
And I was at an event with Sam last night, and he actually said that in the last couple of weeks it felt like people are now taking that seriously. So I'm just curious how you're seeing the structure of organizations changing, especially when you invest in early-stage companies, and how the impact on work structure and all of that is playing out.
Yeah. So there's a whole bunch of topics there; by the way, we'd be happy to spend more time on all of that. So just for people who haven't followed this: this term "managerial" comes from a thinker in the 20th century, James Burnham, who is one of the great 20th-century political and societal thinkers. He was writing in the 1940s and 1950s, and he said that the whole history of capitalism up until that point had been in two phases. Number one had been what he called bourgeois capitalism, which was, think of it as name-on-the-door: Ford Motor Company, because Henry Ford runs the company. And it's a dictatorial model; Henry Ford just tells everybody what to do. And he said the problem with bourgeois capitalism is it doesn't scale, because Henry Ford can only tell so many people to do so many things, and then he runs out of time in the day. And so he said the second phase of capitalism was what he called managerial capitalism, which was the creation of a professional class of managers who are trained not to be car experts or experts in any particular field, but to be experts in management. And that led to the importance of Harvard Business School, business schools, management consulting firms, and all these things. And then you look at every big company today: most of the executives at most of the Fortune 500 companies are not domain experts in whatever the company does, and they're certainly not the founders of those companies; they're professional managers. And in fact, in the course of their careers, they'll probably manage many different kinds of businesses.
They'll rotate around: they might work in healthcare for a while, then financial services, then go work in something else, come work in tech. And what Burnham said is that transition is absolutely required, because the problem with bourgeois capitalism is it doesn't scale. Henry Ford doesn't scale. And so if you're going to run capitalist enterprises that are going to have millions to billions of customers, they're going to be operating at a level of scale and complexity that's going to require this professional management class. And he said, look, the professional management class has its downsides. They're not necessarily experts at doing the thing. They're not as inventive. They're not going to create the next breakthrough thing. But, he said, whether you think that's good or bad, it's what's going to be required. And basically that's what happened, right? He wrote that book originally in about 1940, and over the course of the next 50 years, up till today, managerialism basically took over everything. What I'm describing is basically how all big companies run, how all governments run, how large-scale nonprofits run; kind of everything runs that way.
Basically, what venture capital does is we're a rump protest movement against that, trying to find the next Henry Ford, which is to say the next Elon Musk, or the next Steve Jobs, the next Bill Gates, the next Mark Zuckerberg. And so we start these companies in the old model, right? We start them out in the Henry Ford model. We start them out with a founder, or a founder with colleagues, but there's a founder CEO. And then we basically bet that the startup is going to be able to do things, specifically innovate, in ways that the big incumbents in that industry are not going to be able to do. And so it's a bet that by relighting this name-on-the-door kind of thing, this new innovative thing with a kind of monarchical political structure, they're going to be able to innovate in a way that the incumbent is not going to be able to, because the incumbent is being run by managers, right? And by the way, of course, venture being what it is, sometimes that works, sometimes it doesn't. But we're constantly doing that. I've always viewed it my entire life as, like, we're raging against the dying of the light. We're sort of constantly trying to fight off managerialism just basically swamping everything, and everything getting basically boring and gray and dumb and old, right? And we're trying to keep some level of energy and vitality in the system.
AI is the thing that would lead you to think, wow, maybe there's a third model, right? And maybe the way to think about it would be: maybe it's a combination of the two. Maybe the new Henry Ford or the new Elon or the new Steve Jobs plus AI is the best of both, right? Because it's sort of the spark of genius of the name-on-the-door model, the Henry Ford model, but then you give that person AI superpowers and let the bots do the managerial stuff. That may be the actual secret formula. And we've never even known that we wanted this, because we never even thought it was a possibility. But I mean, you know what these bots are really good at? They're really good at doing paperwork. They're really good at filling out forms. They're really good at writing reports. They're really good at reading. They're really good at doing all the managerial work. They're amazing at it. And so yeah, I 100% think the answer very well might be to get the best of both worlds by doing this. And then the challenge is going to be twofold. The challenge is going to be for the innovators to really figure out how to leverage AI to actually do this, right? And then the other challenge is going to be for the incumbents that are managerial to figure out, okay, what does that mean? Because now they're going to be facing a different kind of insurgent competitor that has a different set of capabilities than they're used to. And so this really, I think, is going to force a lot of big companies to figure out innovation, or die trying.
Do you feel like that structure accelerates the impact on the actual GDP economy? If you look at SpaceX, the growth is so fast. Instead of having these companies kind of peter out in growth and impact, they can keep going, if not accelerating.
That's for sure the hope. The challenge is, and look, the AI utopian view is, of course: that's going to be the future of the economy, and it's going to grow 10x and 100x and 1,000x, and we're entering this regime of much higher economic growth forever, and a consumer cornucopia of everything, and it's going to be great. And I hope that's true. That's the current kind of utopian vision. I hope that's true. The problem goes back, again, to: the real world is really messy. And I'll give you an example of how the real world is really messy. It requires 900 hours of professional certification training to become a hairdresser in the state of California. Something like 35% of the economy is like that: you have to get some sort of professional certification to do the job, which is to say that the professions are all cartels, right? You have to get licensed as a doctor, you have to get licensed as a lawyer, you have to get into a union. By the way, to work for the government, you have both civil service protections and public sector unions. You have two layers of insulation against ever getting fired or anything ever changing.
I'll give you another example. The dock workers went on strike a couple years ago because of robotics. If you go look at a modern dock in Asia, it's all robots. If you go to an American dock, it's still all guys dragging stuff by hand. The dock workers went on strike. It turns out there are 25,000 dock workers working on docks in America. It turns out they have incredible political power, because it's one of these unified blocks. They won their strike, and so they got commitments from the dock owners to not implement more automation. We learned a couple things in that. Number one, we learned that even a union as small as 25,000 people still has tremendous political stroke. We also learned that the dock workers' union actually has 50,000 people in it, because they have 25,000 people working on the docks and 25,000 people drawing full paychecks sitting at home from prior union agreements.
I'll give you another great example. There are federal government agencies where the employees have civil service protections and they're in public sector unions. There are entire federal government agencies that struck new collective bargaining agreements during COVID where not only are their jobs guaranteed in perpetuity, but they only have to report to work in an office one day per month. And so there are entire office buildings in Washington DC that are empty 29 out of 30 days of the year, that are still operating, and that we're all still paying for. And then it turns out what the employees do, and they're very smart in this way, is they figure out to come in on the last day of a month and the first day of the next month. And so they're in the office 2 days per 60 days, which means these buildings are empty for 58 days at a time.
And you see where I'm heading with this. This is locked in, right? This is locked in in a way that has nothing to do with capitalism. People say it's capitalist; it's anti-capitalistic. It's basically restrictions on trade. It's restrictions on the ability to change the workforce. And so much of our economy is like this. I'm describing the entire healthcare system. I'm describing the entire legal profession. I'm describing the entire housing industry. I'm describing the entire education system. Right? K-through-12 schools in the United States are a literal government monopoly. How are we going to apply AI in education? The answer is: we're not, because it's a literal government monopoly. It is never going to change, the end. And there is nothing to do. By the way, you can create an entirely new school system. That's the one thing you can do: you can do what Alpha School is doing; you can create an entirely new school system. Other than that, you're not going to go in and change what's happening in the American K-through-12 classroom. There's no chance. The teachers are 100% opposed to it. It's 100% not going to happen. So you see what I'm saying: there's this massive slippage that's going to take place.
Both the AI utopians and the AI doomers are far too optimistic, right? You see what I'm saying? Because they believe that because the technology makes something possible, 8 billion people all of a sudden are going to change how they behave. And it's just like, no. So much of how the existing economy works is just wired in. And so we're going to be lucky as a society, we're going to be lucky, if AI adoption happens quickly, right? Because if it doesn't, what we're just going to have is stagnation.
Mark, I know you got to run.
Yeah. It was such a pleasure talking to you. We're truly living in an age of science fiction coming to real life.
Yes. Yes. Could not be more exciting.
Really great being with you guys. Awesome. Good.
Thank you.