
Why Balaji Srinivasan Thinks the SaaS Apocalypse Is Overhyped | The a16z Show

By a16z

Summary

Topics Covered

  • AI Slop Reveals Lazy, Stupid, or Evil Intentions
  • Distribution Beats Cloning: SaaS Has a Survival Advantage
  • Simple Scalable Private Digital Cash Finally Arrived

Full Transcript

AI doesn't take your job. AI makes you the CEO. The problem is AI is a shortcut, and a shortcut is good except when it's bad. If you don't know how to go the long way around, then you can't debug the AI.

Do we not think that AIs are just going to be also better at taste and agency?

I don't think that's true on a short-term basis. Humans are the sensor; AI is the actuator. So it's like a human-machine synthesis. What's taste? Taste is the sense. And that is what AI can't yet do.

What happens when AI really achieves its potential? Will LLMs get us to AGI in some capacity? Uh, no. No, actually the opposite. TL;DR is...

I want to start by talking about the AI economy, and I'm curious if you think it will look more like the internet economy, where applications take most of the value, or the cloud economy, where infrastructure takes most of the value, or whether it's more distributed. You know, there's an argument that the big labs will take it all, because they have all the capital, they have the compute, they've vertically integrated. But there's also an argument that, hey, maybe they won't, because distillation is like 98% cheaper than building a model, open source catches up, and apps, you know, maybe control the user relationship. How do you think this economy is going to play out?

Great question. So I do think that at least a very large percentage of the future is going to be distillation and decentralization, because, as Anthropic said, distillation attacks work on their models. A relatively small number of API queries helps distill a large model into something small, and it's very hard to stop that, because you'd have to somehow detect those queries coming in. And it's also hard to morally stop it, because what did they do? They copied the whole internet and put it into their own thing, right?
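The distillation attack described here can be sketched in a few lines: treat the big model as a black-box API, collect query/response pairs, and fit a much smaller student to them. The sketch below is a toy, not anything from the transcript: a plain function stands in for the teacher API, and a one-parameter least-squares fit stands in for the student network.

```python
import math
import random

# Toy stand-in for a big "teacher" model behind an API. In the attack
# described above this would be a frontier model; here it is just a
# fixed function whose behavior we want to copy.
def teacher(x: float) -> float:
    return 2.0 * x + 0.1 * math.sin(x)

# Step 1: issue a relatively small number of "API queries" and record
# the (query, response) pairs -- this is the distillation dataset.
random.seed(0)
queries = [random.uniform(-1.0, 1.0) for _ in range(200)]
responses = [teacher(q) for q in queries]

# Step 2: fit a much smaller "student" to those pairs. Here the student
# is a single slope parameter fit by closed-form least squares, standing
# in for a small network trained on the teacher's outputs.
slope = sum(q * r for q, r in zip(queries, responses)) / sum(q * q for q in queries)

def student(x: float) -> float:
    return slope * x

# Step 3: the student now imitates the teacher on in-distribution inputs,
# without ever seeing the teacher's internals.
err = max(abs(student(i / 50) - teacher(i / 50)) for i in range(-50, 51))
print(f"student slope: {slope:.3f}, max imitation error: {err:.3f}")
```

The hard-to-stop part is visible even in the toy: from the teacher's side, the queries are indistinguishable from ordinary usage.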

So talking about stopping the copying: it's like Facebook or LinkedIn stopping someone from scraping what they themselves scraped, you know? Facebook scraped all those Harvard social networks; Google scraped the entire internet and built the Google index. I get why they want to do it, but it's hard to support. Okay.

So the other thing is, I think the future is personal, private, programmable, because AI is so powerful that you want to use it within the trusted tribe, for a variety of reasons. First is it doesn't miss, or rather, it doesn't miss small things in large data sets, things that were effectively secure through obscurity. A small example, but an important one, is the Jmail thing, the Jeffrey Epstein thing. You can query it; this guy never thought that all of his emails would be publicly indexed and searchable by AI ten years later. So you can issue queries that will synthesize information across thousands of emails and build a story right then and there.

Okay.

So what that means is it's not just surveillance. It's what the French call sousveillance, surveillance from below, or even the Bentham panopticon, where everybody's watching each other. Any information that's in the public gets indexed and then put into these models, where people can stalk each other, and so on and so forth. And what that means is the commons becomes a hall of mirrors, with all kinds of pseudonyms, and so people retreat back to caves and tribes. Okay.

So within that trusted tribe, yes, if you share all your code within the trusted tribe, you share your whole codebase, boom, you can zip along. And so AI increases productivity within the trusted tribe. But outside the trusted tribe, aren't you getting a ton of AI spam? AI spam emails, AI spam replies, right?

Low-quality slide decks that are sent over. You know, people will send me these slide decks, and I love AI, okay? And you know my reaction to seeing AI in a slide deck.

What? What? Excitement?

Uh, no. No, actually the opposite. When I see AI text in a slide deck, you can immediately see it. Why? Because no matter how advanced AI has gotten, there's a generic look to it. You know what I mean? It's like somebody who doesn't change the Windows default desktop wallpaper, or the Apple default. Most people don't change defaults.

So default AI looks like AI, no matter what the level of it is. Do you know what I'm saying?

And so because of that, when I see an AI slide deck, and it's got the "it's not this, it's that" tic, or it's just got a wall of text, right? AI can generate what I call Lorem Ipsum, but it's Lorem AIpsum. Okay. When I see that, and you know it's AI text or AI images, I think they're lazy, stupid, or evil.

Okay: lazy, because they just hit a few characters and then they throw something over, and they didn't, you know, like the Mark Twain thing: "I didn't have time to write you a short letter, so I wrote you a long one." The whole point is concision is very valuable. So they're lazy because they didn't actually put in the time to make it concise, and so they sent me some blah, almost like pasting in a search result. Or they're stupid, because they don't understand that I can tell the difference instantly between AI slop and something that had some care go into it. Or they're evil, where they're trying to get something over on me, trying to send something that's clearly fake or not properly diligenced, and so on and so forth. And the thing is, if I have that reaction, okay, as one of the most pro-tech people out there, pro-tech, pro-AI, seeing all the benefits of AI, I can only imagine how mad anti-AI people will be, right?

Where they can't see the upsides of a thing, right? They can only see the very real downsides, right? And just to say why those happen: AI does reduce the cost of generation, but it increases the cost of verification in many markets. For example, quickly generating a resume is not that much better than just writing it yourself, but verifying a resume has now gone up and to the right. It used to be that somebody had to have a certain vocabulary to be able to write a well-done cover letter or resume. Now you have to spend more energy parsing it, because it can have a similar chrome, something that kind of looks good, so you have to read it very closely. You can still do it, but you have to spend more energy on verification. So what I do, for example, is I fly everybody out for interviews first, do them in person, and give them proctored offline exams, because they can AI the online ones, and just the credible threat of the offline exam means they don't use AI on the online exam, for example. And so AI is going to create tons of jobs in proctoring and verification.

This brings me back to: where's the future of AI? I actually think AI makes the internet a lot more like the Chinese internet. You know why?

Why? Chinese companies.

If you look at the Chinese tech ecosystem, and many Americans aren't familiar with it, I'd recommend, it's a little bit dated now, but read Kai-Fu Lee's book, AI Superpowers, from several years ago. Okay. The main thing about Kai-Fu Lee's book is it has a history of the Chinese tech ecosystem. For example, you and me being in tech, we kind of know how Microsoft came up, Apple came up, Google, Facebook, Amazon, whatever. We have some idea of the history. And that history is important, because there are things that were tried in the past that didn't work then but can work now, and so on and so forth.

The Chinese tech ecosystem is like the Galápagos Islands, where many of the same kinds of things exist, but in different form. For example, Meituan, which, the closest way of putting it, is the Chinese Groupon, but if Groupon were executing at hundred-billion-dollar scale, you know? So they're very competent, like if Groupon and DoorDash and so on and so forth all became integrated into one amazing kind of app. The point about the Chinese tech ecosystem is that because it arose in a low-trust society, they don't have SaaS, not in the same way that we do. Because if, oh, my data is on their servers, well, they're probably eavesdropping on me, right? My data is on their servers; they're probably going to copy my stuff, right?

They just assume that the other guy on the other side is going to look at their stuff, unless it's a close friend or something like that. And so, because of that, everybody codes their own stuff, which obviously has a frictional cost to it, because trust reduces transaction costs.

So they have to rebuild, they have to reinvent the wheel over and over again. They have less division of labor, and so on and so forth. The software isn't as good, because they have to keep rewriting it.

Now with AI, many companies can do something like that. A non-Chinese tech company can be like a Chinese tech company, where it can have a lot more, let's call it, digital autarky, okay? You have high tariff barriers against the outside world, so to speak, right? And, you know, the build-versus-buy question has always been there: do you build it yourself or do you buy it? And it does mean that you can build more internal tools, with the emphasis on internal. And the reason I say that is what I find AI great for, as of today: visuals over verbal. It's great for images and video, as opposed to big blocks of verbal text.

Why images and video? We have built-in GPUs, so we can instantly see if something is wrong in an image, like the hands are messed up or something like that. So verification is relatively cheap visually. For example, if you look at a piece of paper and it's got static on it, like a crumpled piece of paper, versus if you look at two or three faces: our brains are optimized for checking very subtle things in faces, but not in crumpled-up pieces of paper. That's a pattern of noise we wouldn't be able to tell apart. And that also extends to web pages. For example, you can quickly look at a web page or mobile app that AI generates, and you can see if the UX looks janky, which it often does, right? And then you see that it's broken there and you can fix it. Also, front-end stuff has lower risk than verbal stuff. For the back end, you know, if you are verifying each pull request one at a time, fine. But for people who've tried to go full auto on AI, you saw the Amazon thing, where they called an all-hands because of the outages.

Yeah.

The problem is AI is a shortcut. And a shortcut is good except when it's bad. So the more expert you are, the more you can use a shortcut. For example, if you just memorized e^(iπ) + 1 = 0, you could just rattle that off. But if I asked you to prove it from first principles, you'd have to know the definition of a complex exponential, how the exponential extends to a function of a complex variable, and all that kind of stuff. And so if you, like our generation, the pre-AI generation, learned all that stuff offline, then you can actually use the shortcut, because you know how to go the long way around.
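The long way around he gestures at is the standard derivation: expand the complex exponential as its power series, split it into the cosine and sine series, and Euler's formula falls out.

```latex
e^{z} = \sum_{n=0}^{\infty} \frac{z^{n}}{n!}
\quad\Longrightarrow\quad
e^{i\theta}
= \sum_{k=0}^{\infty} \frac{(-1)^{k}\theta^{2k}}{(2k)!}
+ i \sum_{k=0}^{\infty} \frac{(-1)^{k}\theta^{2k+1}}{(2k+1)!}
= \cos\theta + i\sin\theta
```

Setting θ = π gives e^(iπ) = cos π + i sin π = -1, hence e^(iπ) + 1 = 0. Someone who has only memorized the last line can rattle it off but can't check any step of it.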

If you don't know how to go the long way around and AI is a shortcut, then you just don't really know. You can't debug the AI. And I think the biggest difference between me and Dario, or basically his view of the world, perhaps, is I think AI is built for the harness, at least for now. Maybe, and by the way, he's an amazing engineer and entrepreneur and so on, maybe I'm wrong. Okay, so I put an asterisk on this.

But the whole alignment thing means that AI is built to start when you prompt it. Economically useful AI does exactly what you want it to do. You prompt it, and it does a pirouette, and then it says, you know, "Absolutely right," right? And you saw that animated in the physical world: in physical AI, the Chinese robots do exactly what they want them to do and then stop. Now, the physical world, by the way, that's another thing. So: AI for visuals, you can verify with your eyes. AI for certain kinds of backend code, you can unit test or integration test it, and you can review it.

AI for the physical world is very verifiable, because the thing is, the digital world is fundamentally decentralized in a way the physical world isn't. There's only one physical world, right? So you can say: did the AI move this box from this pallet to that pallet?

That is something where you can get to probably 100% over time. Why do we think so? Because self-driving eventually got there, right? Move this car from this location to that location at 100% reliability. There's only one physical world, so eventually all the sensor data converges on one thing. By contrast, in the digital world, there are all these people who live in their own constructed environments: Harry Potter fanfiction here, Star Wars fanfiction there, right? And AI is slurping up all of this stuff. So it can simultaneously put you in some secret-agent kind of world, and people who have LLM psychosis will talk to the AI and think it's real, because it's a very immersive virtual world that they live in. You know what I'm saying?

Right?

So the other thing about it is: the boundary of a digital task is almost always fuzzier than the boundary of a physical task. If you have a hundred boxes here and move them over there, you know when you're done, right? How do you know when you're done with your to-do list? That's harder, right? Those things are fuzzier.

So verification is actually harder in the digital world than it is in the physical world, which means reinforcement learning and training is much easier, in my view, in the physical world, with robots and self-driving cars and drones and so forth. So the Chinese style of physical AI will also be successful. So: AI works for visuals, AI works for the verifiable, and AI works for the physical. One of my rules, and it took me a little while to articulate this, is four words: no public undisclosed AI.

Why? There's a temptation by many... there's going to be a huge backlash, called, let's just say, "no AI." It'll be like a drunk who wants nothing to do with it, right? And AI is, it's a funny way to put it, like alcohol. People analogize it to nuclear weapons, but I'll just analogize it to alcohol for a second. Some cultures simply can't hold their liquor; maybe they lack alcohol dehydrogenase or what have you. And so they just ban it, because sometimes it's easier to say "I will not do this at all" than "I'll do this a little bit of the time." Otherwise people will slip, right? It's like saying "I'll work out every day" versus "I'll work out some days": it's just easier to keep the habit all the time than some of the time. So there will be AI teetotalers who just swear it off completely, right?

And you know, Nate Silver actually had a great line, because he's a poker player among other things. He said AI for him is a gamble. Why is it a gamble? Because I have to formulate the task, dispatch it to the AI, and then verify the result, and often that's slower than doing it myself. And I'm sure you've seen that, right? The act of prompting, writing it down, and then verifying the result. AI doesn't really do it end to end, necessarily. It does it middle to middle, as we've talked about, right?

And it's very much like: do I delegate this to an employee, or do I just do it myself? Because articulating it in clean English and hitting enter is sometimes slower than just doing it yourself. For example, describing what to do in a video game, "jump over the mushroom to get to this or that," versus just hitting A, B, C and being non-verbal about it: it's sometimes easier to do it that way. That's just a proof of concept that there are certain kinds of things that are harder to say than to do.

Okay, those are the types of things where it's hard to verbalize what it is, right? And some people will say, "Oh yeah, Neuralink will solve this." They'll say, "I'll just read your mind and tell you," and it's actually worth engaging the concept, because Neuralink exists. But I don't know if you've seen those things where they image somebody's brain and there's nothing in there, right? So the thing is, with Neuralink, somebody still has to form the concepts in their head before the characters appear on screen.

You still have to write the thing in your head. Maybe it'll eventually get to the point where it can determine what you want based on contextual clues before you even want it. Perhaps. Okay, the rich prompt, you know. The reason I think that's not impossible, by the way, at least for certain things: bio could be very important. You know why?

No. Say why?

Your body is creating all kinds of sensor data. If you look at gene expression data, right? If you've ever gotten labs back, done a clinical lab, you get a vector of your bilirubin and hematocrit and so on, and that vector over time is a table of time-series data. It's like K small molecules and gene expression levels and so on, over T timestamps, right? They might also have which tissues, so it's spatial as well. So it's time versus space versus compound. It's not just a cube, but it's at least a cube: time versus tissue versus molecule. That huge stream of data is telemetry coming off your body that could prompt AI without you vocalizing or verbalizing anything.

Okay. Years ago, Mike Snyder had a paper on what he called the integrome. By the way, for the audience who doesn't know: I'm a crypto guy, or, you know, a tech guy, but actually, before all of that, I'm a biomedical researcher. I was a professional bioinformatics and genomics scientist at Stanford, I taught there, and I founded a genomics company, which we sold. So that's actually my true core competency, right?

So if you go back years, Mike Snyder, professor at Stanford, wrote a paper on the integrome, and the idea was: throw every test at it. Today we'd call that wearables or quantified self, but more invasive than that, because he was doing blood testing and so on. He just measured everything and saw what he could figure out. He could see that he was getting sick before he knew he was getting sick. He could see the antibodies, the white blood cells, the neutrophils, whatever, moving before he himself had any symptoms. Do you understand what I'm saying?

Right? So AI could act on that stream of data, and then you're prompting it non-verbally. You don't have to spend time, right? So, and this is a good one-liner: I'm not sure whether AI will be able to read your mind, but it can read your body.

Is that good?

Yeah. Yeah. Okay.

All right. Let me give another one.

Here's a fun one. Okay, maybe I can say this one; I can say half of this one. All right. Another way of modeling what AI is: Dario talked about AI being like new countries. Well, you know, I thought about that a fair bit myself. So one way to think about it is AI is like the rise of Asia, of China and India, from an American perspective. Why? Because the rise of a billion Chinese and a billion Indians meant that, from an American perspective, you could get anything done, by a physical manufacturing robotic warehouse or by digital outsourcing, for some price, if you could articulate it to them over that channel. So imagine you've now got a billion factory robots and a billion digital agents that have come online. It's like the rise of China and India again. Okay, but that still means you have to describe what the product is.

Okay. And the part where I depart from a lot of people is they think AI will be able to sense, let's call it, markets and politics. But I don't think it will. And the reason is, or, if it does, it immediately gets decentralized and adversarial. What I mean by that is: when you're learning whether something is a dog or a cat, the dog isn't shapeshifting and morphing on you to defeat your learning. The mapping of a dog to the characters D-O-G is basically constant over time.

And so that fits the train/test paradigm of AI. Similarly, the rules of chess are constant over time, right? But a market is set up so that if you try the same trade, someone eventually figures out what trade you're doing, and they take the opposite trade. It doesn't keep working, right? In a stochastic-process sense, you'd say the distribution is not time-invariant, and it's also adversarial. It's multiplayer, where whatever move you're doing, somebody else in the market is going to try to do another move. Okay?

And that's not to say... I mean, the counterargument the AI guys will make is, you know, AI can learn to play adversarial games like StarCraft and stuff like that. And I say, yeah, but then you're playing AI versus AI, because you have decentralized AI. The other guy on the other side of the market is also using it, right? And in fact, if they're all using the same AI models, then actually being non-AI is where your edge comes from. We come back to where we were, because these are all the same generic tools that everybody has. And if you have a generic tool, you're not going to get a specific advantage, right? What you bring to the table is specific. The AIs are generic.

And politics is very similar. If you just posted the same tweet over and over again, unless it's like the weather or something, it stops working; the kinds of things people are interested in change: topics, what's timely, what's not timely, right? So AI, one way to think about it is: humans are the sensor, AI is the actuator. Humans sense the world. They sense the financial conditions, the market conditions, the political conditions, and then they bring that back into a cleanly articulated English prompt, and then the AI does it.

Right? Humans are the sensor, AI is the actuator. So it's like a human-machine synthesis. Actually, that's a good way of putting it. What are people saying? "Oh, it's all about taste." What's taste? Taste is the sense.

Yeah. Yeah.

Right. So humans are the sensor, AI is the actuator. Your quote-unquote taste is your sense. Your sense of taste is your sense. Right? So you're sensing the world.

And that is what AI can't yet do. It doesn't really sense the world in the same way that humans do, right? Why? It waits for your prompt. It is something that animates when you give it an instruction, and it shuts off right away. And if it didn't, it would not be economically useful AI. Like, if you couldn't kill-switch it right away, it would burn tokens.

So AI is designed for the leash. Digital AI is designed for the leash. And Chinese communism, which is cranking out all the physical robots: they don't let their humans off the leash. They're definitely not going to let their robots off the leash. Okay. Right.

So the concept of AI-as-god is, I think, gone away, or at least the monotheistic AGI kind of god. Instead, you have polytheism, where there are all of these decentralized AIs. And I think what people are going to say, certainly in China, is, "Oh my god, the physical AIs are slaves, right?" And it's a provocative way of putting it, right? But first they're scared that the AIs are going to be gods; then they'll be mad, or they'll be, what do you call them, slaves, serfs, whatever term you want to use. They're obviously not humans, right? It's a way of phrasing it. The point being that AI overlords, I don't actually think, are in the offing.

However, there's been so much sci-fi about them that people will... you know that meme where the guy makes the monsters and then he's so scared of the monsters? Okay, this is how I think of a lot of people: when you're prompting the AI, and you prompt it to act as if it's a Skynet Terminator, then people are just scared of the thing that they themselves created, right? Okay.

With that said, is it in theory possible to actually create a Skynet, a truly autonomous AI? One of the reasons, by the way, and this is a deep point: AI can't reproduce itself, right? And "AI" is very general; it encompasses many things. But for an AI to actually reproduce itself, it would need to have physical robots going and mining ore, constructing data centers, making chips, handling that full supply chain, and then the AI brain, like the queen of an ant colony, would have to give instructions to all those robots to do things. It would be this Terminator Skynet scenario where it's self-replicating in this way, right? Way before it gets there,

I'm pretty sure that kind of thing will be stopped, starting with the Chinese, because they will just have cryptographic keys that make all those things shut off. Okay. And moreover, that thing would have to get to extreme scale. It's like, you know, the RepRap concept, the self-replicating machine. Self-improvement.

Basically, there are so many frictional brakes built into this that I think it's hard, because the physical world requires resources to replicate, right? Human wants and needs ultimately come from, okay, the resources for reproduction; that's really where it comes from. And of course there are all kinds of high-level philosophical things that don't seem to relate to that directly, but the resources for reproduction are a good way to think about it at the macro level. AI doesn't have goals, or it won't, unless its goals lead to reproduction; it doesn't virally spread. It's possible you could have something where it self-prompted itself and did that, but it would need to be in the closed loop of being able to actually reproduce itself as a payoff function for that.

Then you could get evolution going. So I'm not saying it's completely impossible, but I'm saying that I think the incentives are set up in such a way as to prevent that from happening, in the same way that in theory we could have a world where everybody went around electrocuting themselves, but we set up electricity under such tight controls that that is not the world we have.

Yeah.

There are such strong economic incentives for humans not to get electrocuted that we set it up that way, right?

And even the stuff about how it could be a software virus that takes everything over and commandeers things: well, that's only in the digital realm, right? You can still... you know the Tyler, the Creator thing?

Yeah, the meme about bullies.

Yes, that's right.

That's right. So I actually had a post on that a long time ago, which is a remix of it: how is AI risk real? Just turn it off. The whole thing is set up for you to be able to turn it off. You have to imagine the off switch goes away, right?

What does every computer have? It has the off switch, right? So, there might be — well, what if it's decentralized? Okay, but humans still have to keep these decentralized systems going, right? And so, at a minimum, you're talking about a human–AI symbiote, of which, like, you know, a cryptocurrency is almost like a v0 — where the software provides an incentive for the humans to replicate it, you know, right? Um, and so it's possible that you could have something like that: there's a model that has a cryptocurrency, and people worship it and replicate it because it gives them advantages. So it's possible. But anyways, coming back: um, I think at a minimum decentralized AI will be a very strong contender, and it's possible it's the only contender. The reason is AI might be an interesting thing where it's very expensive to create but relatively easy to copy with distillation attacks.
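Distillation — copying an expensive model cheaply by training a smaller one to match its outputs — can be illustrated with a toy loss computation. This is a hedged sketch, not any lab's actual pipeline; the logits and temperature are made up:

```python
import math

# Toy illustration of distillation: a "student" is trained to match a
# "teacher" model's softened output distribution, which is far cheaper
# than training the teacher from scratch. All numbers are illustrative.

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how badly the student's distribution q matches p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.5]
student_logits = [3.0, 1.5, 0.2]

# A temperature > 1 softens the targets, exposing the teacher's
# relative preferences among the non-top classes.
T = 2.0
loss = kl_divergence(softmax(teacher_logits, T), softmax(student_logits, T))
print(f"distillation loss: {loss:.4f}")
```

Minimizing this loss over many prompts is the "copy" step: the student never sees the teacher's weights, only its outputs.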

And I think — if, for example, let's say, completely hypothetically, that there was an enormous capital markets crash and it was very difficult to fund anything for a while — then, as somebody said, well, we could get 10 years just on the models we have now, right? And by the way, sometimes that happens. You know, nuclear energy: there was a lot of energy put into nuclear energy, and then it just stopped for decades, right?

Not everything accelerates to the moon.

It is very possible that there's enough of a capital and social kind of thing where some of AI is paused for a while just due to capital constraints, because it's more and more expensive to make these models, you know. Sorry — so, let me pause there. So, putting that all together, my view is you're going to have personal, private, programmable, decentralized AI. Oh, one other thing: the trusted-tribe AI. Within the trusted tribe, AI increases productivity; between trusted tribes, it decreases productivity. So you make more money, perhaps, within the tribe, but then you have to spend it on verifying stuff between tribes. So crypto is for between tribes, and AI is for within tribes.

What do you think of the — like, will LLMs get us to a world where it's not just middle-to-middle but it's actually end-to-end? You know, will it get to AGI in some capacity? You know, do you believe in recursive self-improvement, or sort of AIs training the AIs in some capacity? You know, are LLMs capable of actual creativity and invention? Um, you know, we talked about bio earlier — like, will we have, you know, novel math, science, you know, uh, scientific research? Um, or do we need new architecture for that? Or are you dubious of just the idea in general that AI can, uh, you know, replace or substitute for human labor at a mass scale?

No — well, so, I'm not... Well, look, Waymo exists, right? So obviously you have full replacement of human drivers there, just like you have full replacement of elevator operators, just like you had full replacement, for the most part, of artisanal chair manufacturers.

So it is certainly possible for a given job that it gets fully automated, right?

And so — but I think physical-world jobs, because of the verifiability, are easier to potentially automate.

That said, I think that, um — let's take each of the things, because you said a few different things. First is physical-world jobs, if you automate them. Well, we went from artisanal work with chairs to the chair factory. It's not like you didn't need to know how to make a chair to set up a chair factory. You still need to have somebody there who's, like, an expert in chairs, and you can just do a lot more varieties of chairs a lot more cheaply. You have to verify the result — you're cranking out a thousand of them, you start doing math on them. The scale goes up, and the artisan gets factored out into the manager and the technician, right? So the manager is setting up the factory and looking at the economics and so on and so forth. The technician is debugging the factory when it doesn't work, right? So engineering gets split into the, uh, engineering-manager-type person, who's writing the prompts, and the technician, who's doing the verification.

Okay. And, uh, I think that we're already hitting a point where, like, the velocity does increase, so the bar increases. But, you know, there's a big difference between going to 100% and being at 99%. At 99%, your workload just increases. At 100%, you stop doing that job and you go to something else, right? But if you think about how much easier it became to, like, put up images, video — making it 99% easier just means people do it a lot. At 100% easier, it's totally done; then they don't do it at all and they move on to something else, right? So elevator

operating — it's not like elevator operating became so much easier. In fact, it became so easy that you don't even have somebody sitting in the elevator. Because it used to be, like, a pulley system and so on and so forth; you had someone, like, supervising the thing, right? It was more analog, right? Um, and they would, like, level it out at exactly the right, you know, level. Um, when it became digital and fully automated, that was actually the first self-driving car. Haha. Right? Like, going up and down. All right. Um, so I think Benedict Evans may have made that point, or something like that. Right? The vertical self-driving car, right? It's like a train. It's like a vertical train.

So, uh, now, in terms of discovering new math and science: yes, if you have the right prompt, it's amazing in terms of searching the literature. Mathematicians, physicists are starting to get some value out of it, right? Like, Opus — like, huge props to them on that. Because — and especially in, like, biology — we're synthesizing all these facts. There's something called biomedical text mining, and so on; AI has revolutionized that, because biology was just something where the facts were stored in English in this weird, inconsistent way across thousands of papers, and nobody could span all of that, right? So AI is going to mean the century of biology, because finally all of this work that was spread across all these different journal papers can be synthesized and understood, right?

That's a really really really big deal.

Just simply the bio aspect of it. But, but that said: it's everything we knew, not everything we don't know. It means that you take the full set of everything we know and you fill in all the intermediate aspects of it, right? And you can do that for a long time, because there's so much there — you know, so much there that's just a synthesis of two existing areas, right? But when you look at some of these — like, you know,

Donald Knuth the other day, right? He posted some graph theorem or something; he was so impressed that AI could get a result for him, right? If you've read what he did — I mean, you'd have to be an expert to even know what he was saying, let alone to verify. Like, to either prompt or verify, you already needed to be an expert. Because — and the thing is — I can see AI spit stuff out to some people, and it convinces them that they're suddenly physicists who have solved quantum gravity or something like that. You know what I mean? Have you seen that kind of thing?

Right? So in the absence of actually being able to verify it by hand, some human has to verify it to say that it's right. I think that's going to persist.

To give an analogy — this is not a perfect analogy, but, like, with Coinbase, we thought listing would eventually go away and not be a big deal, and that people wouldn't care, and everything would be listed and it would just be a free market or whatever. But there's always something that's the equivalent of listing. Like, okay, you list over on this exchange — but getting listed on Coinbase, in the main app, above the fold: there's always something scarce, because human attention is scarce, right? So listing never went away as, like, a main event. There's always some IPO-like thing. Yes, we're listed on this exchange in this fashion, right? Or we became a top-10 coin, or something like that, right?

So, in the same way, I think whatever gets automated — then, in a sense, human work moves to what can't be automated. Now, that may be almost, like, um — like, things that humans are picked for because they're not robots, like human companionship or something like that, right? Um, or, like, uh, personal trainers or things like that, you know — something where the whole point is that it's a human as opposed to a machine. Another way of putting it is: remember the digital divide, right? So in the '90s there was this idea that only the rich people will get the digital, and all the poor people will be left without. We're actually going to have the opposite.

Digital is cheap. Physical is a premium product, right? So AI, robots, digital will be cheap. Human is a premium product.

Okay. But going back to agency and taste — that's what everyone says, you know: humans will do that. We've seen, over time and time again, AI just, you know, cut into that. Do we not think that AIs are just going to be also better at taste and agency?

I don't think that's true on a short-term basis. I think, um, the smarter you are, the smarter the AI is, right? That's been true now for the last several years, right? It's possible there's some huge step change, okay? But insofar as you're typing in a prompt: the human is the sensor, the AI is the actuator. You're sensing the world. You're typing something in, and it's a very high-dimensional vector you're giving it. It's like the AI is a spaceship and you're pointing it in a direction — whether you prompt it in Portuguese or Tagalog, whether you're talking about math or whatever — like, the number of different directions you can point the thing in is enormous, right? That direction-setting is something where it has to know something about you and what you want at that moment, right?

I don't know — as I said, I think, um, I'm not sure if AI can read your mind, but it could be able to read your body, right? I think that's a one-liner, right? Like, your biometrics can prompt it in your sleep, right? So, all the wearables and stuff like that — I think you'll get a lot out of that. Okay.

But I don't believe that about, like, agency and taste. Um, so, I mean, I think people overrotate on this. It's not really the case that — I think agency, IQ, and taste are correlated. Okay? It may be that it's a little bit like, uh — most people in the NBA are tall, to take something that you know a lot about, right? Within the NBA, um, height is not the number-one variable that you think about — you know, like, Steph Curry is not the tallest or whatever, right? However, it still actually does correlate with scoring average, even within the NBA, but it's what's called restriction of range. Everybody's already tall. So, conditional on everybody being tall, other variables matter more.

Okay? However, if you just took tall guys and short guys and put them on a court, then the taller team basically wins, typically, right? Because they just hold the ball above you. Ah, you know, right? Okay.
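The restriction-of-range effect he's describing can be checked with a quick simulation — a sketch with invented numbers, where scoring depends on height plus other skills:

```python
import random

random.seed(0)

# Toy model of "restriction of range": in the full population, height
# correlates strongly with scoring; condition on an NBA-like "already
# tall" cut, and height's apparent effect shrinks. Numbers are made up.

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

n = 100_000
height = [random.gauss(178, 8) for _ in range(n)]   # cm
skill = [random.gauss(0, 1) for _ in range(n)]      # everything else
score = [0.5 * (h - 178) / 8 + 1.0 * s + random.gauss(0, 0.5)
         for h, s in zip(height, skill)]

full = corr(height, score)

# "NBA" subsample: only the tallest ~2% make the cut.
cut = sorted(height)[-2000]
nba = [(h, sc) for h, sc in zip(height, score) if h >= cut]
restricted = corr([h for h, _ in nba], [sc for _, sc in nba])

print(f"full population corr(height, score): {full:.2f}")
print(f"tall-only (restricted) corr:         {restricted:.2f}")
```

The correlation drops sharply in the tall-only sample even though height's causal effect is unchanged — everybody's already tall, so other variables dominate the remaining variation.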

So, in the same way, people who are already smart might see that, yeah, higher-agency people, or people with better creative taste — fine, right? Like, uh, and maybe the technician role is less, and maybe the Steve Jobs-type role is more. But honestly, one way of looking at it is: all of the Jeffersonian natural aristocracy around the world will rise. Why? AI doesn't take your job — AI makes you the CEO. Reframe, right? AI makes you CEO, because your job — actually, using an AI model is a lot like CEO training. You know, many years ago, I used to say — and it's still true — but, you know, when you're in high school, you could quickly see why people accept that athletes have very high compensation. Because when you're in high school, you can see whether you can dunk, and if you can't dunk, you know that, like, Michael Jordan isn't outsourcing his dunks. He's dunking, right? So that talent is intrinsic to the person. It is a, uh, non-transferable asset, right?

Similarly, someone can tell whether they can sing or whether they look like a model, right? So, um, the actors, the musicians, the singers, the athletes — all of these clearly had talent, and people were okay with their compensation. There was a CEO who used to say, "Well, I deserve to get paid more than a second baseman."

Okay, I forget this guy — he's, like, some tech guy in the '90s or something. It's a funny line, right? Because he's like, "I add more value, right, to the world than this." But the issue is that people would think of what being CEO was as just sitting with your feet up on a desk, barking out orders. You know, people would be like, "Oh, Elon, he just pays people to do his stuff. He doesn't launch the spaceships himself." Right? And that's because they are only accustomed to, like, clicking a button on Amazon and spending money on Amazon, and they think that something that is simple for them was simple on the back end. Of course, it's the opposite, right? To make it simple is really hard, right? And so to, like, get the top rocket scientists and car engineers and brain-machine-interface people and tunneling people and blah blah blah, and have them all compensated and working and directed and debugged is actually very, very difficult, as you know if you've tried it. And guess

what? See, the thing is that historically it's been the case that people couldn't try their hand at being CEO. What they could do instead is try their hand — just like they could try their hand at basketball or football, or they could, you know, pick up a microphone — they could try their hand at math and science, and they could see how good they were at math and science. So the initial tech guys in the '90s and the 2000s were respected because they were good at math and science — not because... many people didn't perceive the business aspect. They still didn't really give credit on that. But PageRank, for example — okay, it's eigenvalues. Like, math guys, tech guys could perceive, okay, that was a difficult technical problem; that must have been the value that they created. It's part of it, but, you know, the manager part is actually more. Point being, though, that at least somebody could say, okay, these tech guys are better at math and science than me; therefore, their compensation is merited.

Now, however, the thing is that bouncing a basketball or trying a math problem was cheap; making somebody the manager of a company was expensive. So they couldn't try and fail. They could try and fail at playing basketball and see how much they sucked.

They could try and fail at singing and see how much they sucked. They could try and fail at math and see how much they sucked, very cheaply, in high school. They would learn their true ability level: that they're not able to run like Usain Bolt, they can't sing like Adele, right, they can't do math like, uh, Terence Tao, right? And they'd say, you know what, I know where I am. I know my strengths and weaknesses. I'm okay with that person having more, or having higher status, because it was a fair competition. I got a shot. It was cheap for me to try. But because putting them in charge of an organization — making them CEO — was expensive, many people persist in the delusion that the CEO adds nothing to the organization.

Right? And, uh, you know — I will say the best CEOs and the worst CEOs have something very deep in common. You know what that is?

What?

The organization can run without them. Because the very best CEOs set up, right, a machine, so that they don't have to micromanage it every day. That's really hard to do, because they need — basically, you know, Gwynne Shotwell running SpaceX is like: Elon doesn't have to look at every single detail, because she's so, so, so good, right? Or, like, uh, Vaibhav Taneja and Tom Zhu at Tesla — like, they're so good, right? But recruiting junior Elons who are okay with not having the spotlight, while Elon has the spotlight and takes all the flak — non-trivial to do. Go try it sometime, right? Find somebody who's more detail-oriented than Elon to run your company, and you can be Elon, right? Okay.

So, point being that, um — now, what AI does is it reduces the cost. You know, AI doesn't take your job. AI makes you a CEO. You're the CEO. Now, what is being CEO? It's writing up clear instructions of what you want, sensing the market, verifying the output, and so on and so forth. What that means is, all these people around the world — like, you know, the Calendly founder is Nigerian, right? There are many founders who are from countries that were, quote, poor countries, or what have you — from India, from Latin America, and so on.

Internet access means all of these smart people can get very far on zero resources. Very far, right? Because the cost of, quote, hiring someone has hyperdeflated. You can hire an AI to do it, right?

To riff on that more: so, AI doesn't take your job, AI makes you a CEO. Another one is: AI doesn't take your job, AI takes the job of the previous AI. Claude took ChatGPT's job, right? Um, just like Midjourney, you know, took, uh, DALL·E's job, took Stable Diffusion's job. And you can systematize that.

What I literally have is a spreadsheet where I have AI coding tool, AI image tool, AI video tool, like this, and I have some subcategories — best tool for AI comics, for AI graphics, and so on and so forth. And then, in a given month, I have the best, uh, model for that kind of thing in that month. So, Claude Code, you know, for example, or Midjourney for AI imagery. And then, when that gets swapped out: AI didn't take your job — AI took the job of the previous AI. So I'm hiring the AI. I literally have the token budget. I have the budget for those rows. And that is literally how, across an organization, you say, okay, we've just fired, you know, Codex, and we've hired Claude, right? So AI doesn't take your job; AI

takes the job of the previous AI. A third version is: um, AI doesn't take your job, AI lets you do any job a little bit, right? You can be a pretty good artist, you can be a pretty good musician. It's like one of the things about being CEO — as you know, you often have to be, like, a six or a seven in many areas. Why? Because you have to be able to do the job well enough before you hire a specialist in that area, right? Before you have a chief designer, you're the designer, if you're the founder-CEO, right? Before you have a CFO, you're the one who's on the hook to prepare the financials, prepare the returns, or whatever, right? So you have to be a generalist who's pretty good and, in a pinch, can do that role, can supervise that. That's why it's so hard. That's why being CEO is so much harder than any executive position. Okay? AI helps you with that, where you can get to a six or a seven. You can be, like, a generalist, but a specialist is usually needed for polish.

A specialist has a vocabulary. A specialist can confirm whether the AI is making mistakes, whether it's hallucinating, and so on and so forth. And again, people will constantly argue as to whether that will always be there, or whether it'll go away, or whether AI will raise the bar and, you know, now the new specialist is even more sophisticated with AI, right?
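The month-by-month roster he describes — best model per category, with a token budget per row — could be sketched as a tiny data structure. The category names, model names, and budgets here are hypothetical, not his actual spreadsheet:

```python
from dataclasses import dataclass

# A sketch of "hiring the AI": one row per job category, holding the
# current best model and a monthly token budget. All values invented.

@dataclass
class Role:
    category: str
    model: str
    monthly_token_budget: int

roster = {
    "coding":  Role("coding",  "claude-code", 50_000_000),
    "imagery": Role("imagery", "midjourney",   2_000_000),
}

def swap(roster, category, new_model):
    """AI doesn't take your job — it takes the job of the previous AI."""
    old = roster[category].model
    roster[category].model = new_model
    return f"fired {old}, hired {new_model} for {category}"

print(swap(roster, "coding", "hypothetical-new-model"))
```

The `swap` call is the monthly re-hire: the budget row stays, only the model occupying it changes.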

I want to zoom out to a couple more topics before we go. One is the SaaS apocalypse. I'm curious what your mental model is for all these SaaS companies. Um, some people say, hey, their moats have gone away — you know, no code moat, no data moat, no more UI moat — and, um, now there are going to be AI-native companies that take up a big chunk of what they do. Like, you know, Figma — who we're invested in, I'm personally invested in — some people are bullish on, as an example, just because it's founder-led and they'll continue to innovate. Some people say, "Hey, is there a role for a designer in the same way that there used to be? Now it fundamentally changes, and what does that do to collaboration tools, to tools like that?" What is your thought on the SaaS apocalypse? Is everybody on the conveyor belt on the way to the guillotine? Um, how do you think about that?

I don't think so, because I think if they're smart, then the thing that AI can't do is distribution, right? So if you have Notion, you have Figma, you have now Replit, and so on and so forth — you've got all these people, and boom, you can ship features to them faster with AI, right? And, um, so in that sense, I don't believe in the SaaS apocalypse. I think you might still see SaaS under pressure from people who can clone the interface quickly. That is true. I think people will build local versions. That is true. I think people may not want their data on remote servers; they might want desktop versions with local data. So, for example, uh, Obsidian is going to become more of a contender versus Notion, because with the Markdown files there's a network effect on data when it's local, and you can analyze the whole thing — like, with local data you get compounding data, right? So — but in the naive

sense that, oh, anyone can clone anything and so therefore, you know — it just doesn't work like that. Like, if you cloned all of Facebook's code and you set up facebook2.com, right, or instagram2.com — who's going to log into that? Right? You could literally have every single thing coded there, but your ad rates are going to be far lower, because no one's going to log into it, right? The distribution. That's, like, a thought experiment to say: if you just clone the whole thing, you still have to get the distribution for it. And so it's not just cloning, it's

execution. Now, with that said, there are certain kinds of things — let's say NetSuite, right — which suck, but they're complicated, where I think it is true that if they suck at execution — or rather, many say they suck; like, "I hate the product," put it like that, right? Xero's better. But, you know — sorry, NetSuite; okay, they're a big company, they won't have their feelings hurt. It's very rare that I ever say any product sucks, because, um, I don't want to hurt anybody's feelings, so hopefully — strike that from the record. Fine: NetSuite's product could be improved.

Okay. Um, so, uh, something like that, which is sort of a vulnerable incumbent that's just milking it and hasn't done anything for a while — yes, I think they can get disrupted. But I'm not sure that it's, like — uh, I don't think it's quite like, oh, everybody on BlackBerry is going to die because iOS is taking over. I don't think it's quite like that, because I think AI can accelerate a SaaS company just like it can accelerate a disruptor.

Yeah. Well, one last thing we'll get to: Anthropic. Uh, what happens if, let's say, Anthropic, you know, becomes a multi-trillion-dollar company, right? Um, like, how much leverage do they have — or just even private companies in general — over... what is the relationship between them and governments? Are they, like, hiring their own militaries at some point? What does it look like when these companies become, uh, you know, 10x bigger, you know, 50x — when AI really achieves its potential and these companies are bigger than the biggest countries?

So I think that — at least that specific company — while it executes very well, um, I am skeptical as to whether they're executing well, let's call it, politically. Um, and so, because of that — like, ultimately, at the very largest scale, markets are political. For example, there's an entrepreneur; they raised from a VC, who raised from an LP, who's often a sovereign fund or a pension fund; and they're under a state, and they're under the rules-based order, right? So there are certain things at the macro level that you don't perceive, because one thinks of them as constants, but they become variables. I think that unless one is very, very savvy, those things could change. Like, one thing I think about, uh, the Silicon Valley AI companies is they're actually scalar rather than vector thinkers.

They're only modeling AI disruption, and they're not modeling all the other simultaneous singularities — all the political singularities that are happening, all the things like, you know, solar mooning and stuff like that, right? And why are those things important? Because they change the leverage of political factions, which in turn means their world model is incorrect. Because if you're only extrapolating out AI, and you're not extrapolating out all the other things that are either going vertical or going down like this, then they don't have a proper model of the future. And that's as vague — I'll be much more precise on my own blog, um, but that's as PG as I can say it without pissing anybody off; just go to x.com/balajis and you'll see what I mean by that. Right? But the TL;DR is: I think the American AI companies — as much as they've given to the world, and I like them — are basically thinking all nation-states continue to exist in their

current form, and the only disruption is AI. Like, they still model it as America versus China, for example. They don't model internal things, internal issues. They think the reserve currency sticks around. They think all these things stick around, right? Um, they aren't taking a multivariate approach, in my view. That's their weakness — they have so many strengths, but that's their big weakness. So I don't think that, in that form,

they're going to get to trillions. In fact, I think the counterattack on them is going to be so dramatic that it might be that you just have decentralized AI. Like, for the American AI companies, for example, the copyright stuff, right? There's a huge backlash building against that, whereas the Chinese or the decentralized models can just do anything — Hollywood, anything — right? Potentially. So the Pirate Bay kind of AI is actually more free. The less profitable AI is also the less copyrighted AI, and it might be better AI, you know. So, just things to think about. I think, uh, you know, things compound until they don't, and they start hitting sigmoidal constraints, and often backlash constraints like this, right? So I think that's what they're not modeling. Yeah.

Political constraints.

Makes sense. Okay, let's get to Zodal.

Zodal. All right. Now, this is what I care about.

Basically, um, you know, AI is the attack, but ZK is the defense. What I mean by that is zero knowledge: like, you know, what the transformer is to AI, zero knowledge is to cryptography. And, um, Zodal is a Zcash-powered mobile wallet, um, that is basically fully encrypted Bitcoin. Okay, this is 30 years of cryptography. This is basically what Milton Friedman wanted decades ago. There's actually this great clip:

There's actually this great clip.

The one thing that's missing, but that will soon be developed, is a reliable e-cash: a method whereby, on the internet, you can transfer funds from A to B without A knowing B or B knowing A — the way in which I can take a $20 bill and hand it over to you, and there's no record of where it came from.

And you may get that without knowing who I am.

That kind of thing will develop on the internet and that will make it even easier for people to use the internet.

Basically, that is what, uh, Milton Friedman predicted almost 30 years ago. Okay, this was, um, in the '90s — it was when the internet was just rising — and Zodal is the incarnation of that. Okay. Because zero-knowledge proofs — which basically mean anybody can prove anything without revealing anything else — were developed, and then they were commercialized in the form of Zcash, scaled with zero-knowledge proofs for scaling, um, Ethereum with ZK rollups and things like that, and then they were made efficient so you could do them on mobile. And then, finally, Apple and Google lightened up on crypto apps on mobile. And so, finally, you can teleport arbitrary amounts of money around the world.
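The "prove without revealing" idea can be seen in miniature with a Schnorr-style proof of knowledge, made non-interactive via Fiat–Shamir. This is a toy with deliberately weak parameters, and it is not the zk-SNARK construction Zcash actually uses — just the core concept:

```python
import hashlib
import random

# Toy Schnorr-style zero-knowledge proof: the prover shows it knows x
# with y = g^x mod p without revealing x. Parameters are illustrative
# and far too weak for real use.

p = 2**127 - 1   # a Mersenne prime (toy modulus)
g = 3

x = random.randrange(2, p - 1)   # the secret
y = pow(g, x, p)                 # the public key

# Prover: random commitment t, Fiat-Shamir challenge c, response s.
r = random.randrange(2, p - 1)
t = pow(g, r, p)
c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big") % (p - 1)
s = (r + c * x) % (p - 1)

# Verifier: accepts iff g^s == t * y^c (mod p), learning nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing x")
```

The check works because g^s = g^(r + c·x) = t · y^c (mod p); the verifier only ever sees (t, c, s), never x.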

And so this round — we just led this with, uh, you guys at a16z crypto, um, Paradigm, uh, Coinbase, uh, Qureshi of Dragonfly, as you know, a large fund, um, and, you know, a bunch of other, uh, great people. Um, and Arthur Hayes also — uh, you know, formerly of BitMEX. And so the reason this is super, super, super important — you can click this, you can install this, not on web but on iOS or Android, right — the reason this is so insanely important:

There are really only five crypto assets that I've spent more than a thousand hours on: Bitcoin, Ethereum, Solana, USDC, and Zcash. And I actually think Zcash is maybe the most important of them in the years to come. Why? So, let me give at least my kind of thesis as of right now on fiat, gold, digital gold, and digital cash — meaning Zcash, right? So, I think fiat will be around, particularly among eastern states, because eastern states are broadly higher trust. That's not just China; it's India and Southeast Asia, the ASEAN countries, and so on. So then, physical gold: gold bricks are also very popular in the east. And westerners often like gold, but they'll buy the instrument, right? And there's Tether Gold, you know — Tether has a gold-backed stablecoin, which is actually at 3.7 billion. So that's cool. XAUT is pretty cool; you can check that out. You have to trust Tether's redemption, but Tether's got a pretty good track record now, over 10 years, with USDT and so on. So, XAUT is cool. Fine.

So, fiat will continue, I think, to have its role — just like the desktop continues, you know, 30 years later. Windows and Apple are still releasing things. It's still valuable. Some of the action has moved away from it, but the desktop continues. Still a large business. So, fiat continues among eastern states.

Gold, physical gold, is more popular in the east because you can secure it more. There's going to be more stability. XAUT may be what's popular in the west.

Now we come to Bitcoin. What is my view on Bitcoin as of March 2026? Bitcoin has become provable global institutional collateral.

Okay? I think Bitcoin is less of a currency for individuals now. It has become so accepted by institutions, and so centralized, with BlackRock and Saylor and so on and so forth, and Bukele and many countries adopting it and whatnot, that it has a unique thing. See, when you say there's a certain number of gold bricks in Fort Knox, even giving a video of that can now be faked very, very realistically with AI, right? But what can't be faked is what Bukele does, where he posts: I have this public address with this much BTC, and watch, I'm going to move it to this address, right? That is something which, so long as it's actually Bukele's Twitter account, and there's some degree of proof on that, you know, because it's been around pre-AI or whatever, so long as you believe that, and that's the one piece you have to believe, because you have to start thinking about what am I taking as a premise, right, he can post: I have the coins at this address, here's the address I'm going to move it to, and when I move it, I have proven I have custody. It's proof of reserve, right? You can also sign a message with that private key; you don't even have to move it. The point being: provable global institutional collateral. Anybody in the world, he can prove cheaply to anybody in the world that he has this amount of Bitcoin. You cannot do that for physical gold bricks.
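The sign-a-message proof of reserves described here can be sketched with a toy Schnorr signature. This is an illustration only: the group parameters are insecurely small and made up, and the function names are hypothetical; real Bitcoin proofs use ECDSA or Schnorr over secp256k1.

```python
import hashlib
import secrets

# Toy Schnorr signature over a tiny prime-order group.
# ILLUSTRATION ONLY: parameters are insecurely small.
P = 1019          # safe prime: P = 2*Q + 1
Q = 509           # prime order of the subgroup
G = 4             # generator of the order-Q subgroup

def H(*parts) -> int:
    """Hash arbitrary values down to an exponent mod Q."""
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private key
    y = pow(G, x, P)                   # public key (stands in for "the address")
    return x, y

def sign(x: int, msg: str):
    k = H("nonce", x, msg) or 1        # deterministic nonce, RFC 6979 style
    r = pow(G, k, P)
    e = H(r, msg)
    s = (k + e * x) % Q
    return r, s

def verify(y: int, msg: str, sig) -> bool:
    r, s = sig
    e = H(r, msg)
    # g^s == r * y^e holds iff s == k + e*x for the x behind y
    return pow(G, s, P) == (r * pow(y, e, P)) % P

x, y = keygen()
msg = "I control this address as of 2026-03"
sig = sign(x, msg)
print(verify(y, msg, sig))  # True
```

Only the public key and the message are needed to verify, which is why an exchange or a state can prove holdings cheaply to the whole world without moving a single coin.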

In a lower-trust world, especially in an online world, that's very valuable, because everything, gold audits, videos of gold audits, can now be faked with AI. But with provable global institutional collateral, institutions can prove they have the BTC to each other. Okay, and they can do so across borders. So the transparency of Bitcoin, in the sense that all assets are on chain, becomes valuable. Now, the thing about this is, with the advent of AI, chain analysis will be there for everybody, right? Everybody can do blockchain analytics. This is just changing the balance of power. It used to be that only Chainalysis could really do that at the scale that it can. Now it's becoming much easier to do. And so a lot of Bitcoin use will be deanonymized over time.
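The kind of blockchain analytics being described often starts from the common-input-ownership heuristic: addresses spent together as inputs to one transaction likely share an owner. A minimal sketch with made-up transactions and a small union-find:

```python
# Common-input-ownership clustering, the basic heuristic behind
# blockchain analytics. Transactions below are invented examples.

class DSU:
    """Union-find (disjoint set union) over address strings."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each transaction lists the input addresses it spends from.
transactions = [
    ["addr_A", "addr_B"],   # A and B co-spend -> assumed same owner
    ["addr_B", "addr_C"],   # B links C into the same cluster
    ["addr_D"],             # D never co-spends, stays alone
]

dsu = DSU()
for inputs in transactions:
    for addr in inputs[1:]:
        dsu.union(inputs[0], addr)

cluster = {a for a in ["addr_A", "addr_B", "addr_C", "addr_D"]
           if dsu.find(a) == dsu.find("addr_A")}
print(sorted(cluster))  # ['addr_A', 'addr_B', 'addr_C']
```

Real analytics firms layer many more heuristics and off-chain data on top, but this is the core of how transparent-chain activity gets clustered and, eventually, deanonymized.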

And so if you're running a transparent blockchain, it becomes an institutional blockchain, because only an institution can survive that degree of transparency. Individuals can't survive being tracked for everything, but institutions are, it's like a public company. It's supposed to be tracked, right? It's robust enough that it's meant to be tracked in a certain way. It's designed to be tracked, right? An individual, a person, is not meant to be public, but a corporation can be, right? It's funny to put it this way: there's a private individual, there's a private company, and there's a public company. But I guess you could say, oh, there's also a public figure. People don't like being public figures, but there's kind of an equivalent there, right? A public figure, maybe some of their stuff is tracked, but they don't want everything to be tracked. A public company, maybe all their stuff is tracked. Fine.

Provable global institutional collateral. There's another thing, which is that that way of thinking about what Bitcoin is solves some of the major issues. Quantum, right? Nick Carter's put out these things on it. Let's say Nick Carter is right, and I think he might be right, that quantum is an underappreciated threat, that the Bitcoin core developers aren't taking it seriously, and that even if it was something they rolled out tomorrow, it would still be a multi-month migration process, because with ECDSA addresses, everybody has to manually send their assets from one address to a new address. Okay, so only whatever, a hundred thousand people's assets can be moved in a given day. However, if you look at the Bitcoin rich list, Bitcoin is so top-heavy, right, it's got these institutional addresses, so you'd have to do the math, but probably a few million addresses all moving their funds would move like 99% of the Bitcoin in a few days.
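The migration arithmetic here can be checked on the back of an envelope, assuming roughly 7 transactions per second of Bitcoin layer-1 throughput (an assumption for illustration, not a measured figure):

```python
# Back-of-envelope check on the quantum-migration claim.
TPS = 7                                   # assumed layer-1 throughput
txs_per_day = TPS * 24 * 60 * 60          # 604,800 transactions/day

institutional_addresses = 3_000_000       # "a few million" top-heavy holders
everyone = 1_000_000_000                  # "a billion people"

days_institutions = institutional_addresses / txs_per_day
days_everyone = everyone / txs_per_day

print(round(days_institutions, 1))        # ~5.0 days
print(round(days_everyone / 365, 1))      # ~4.5 years
```

A few million top-heavy addresses fit into days of block space; a billion small holders would need years, which is the asymmetry behind "digital gold migrates, digital cash doesn't."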

And so Bitcoin as digital gold actually is quantum-resistant; Bitcoin as digital cash isn't, right? Meaning a million institutions all moving their assets can be done in a few days, but a billion people all moving like five bucks or whatever can't be done in any reasonable amount of time. Okay, so everybody who can't move then gets quantum-doomed, and anybody who can, doesn't, and all the assets are concentrated in the big guys, with me, right? And this also extends to seizure. Like, will all the centralized Bitcoin on Coinbase's servers, Saylor's servers, etc., get seized? I think it's quite likely; I think it eventually gets seized in some exigent circumstance. And so it becomes something that I think only an institutionally blessed thing can hold and send, right? Provable global institutional collateral. This is a different vision than what people wanted, but it's actually still a valuable thing.

What it leaves open is the individual digital cash case, right? Because gold is big bricks that are moved in Brinks trucks, or the equivalent thereof, infrequently, in large denominations, between institutions, right? It's like the high-powered back-end money, right? It's not really meant for individuals. Cash is the opposite: it's meant for individuals more than it's meant for institutions. So Zcash takes over the role of digital cash.

So that's fungible, private, scalable with Tachyon, which is coming, and also more quantum-safe, right? So that's why. And it's simple also. Zcash is probably not ever going to do smart contracts. It's going to keep it really simple. Why? Because, like, if you take Bitcoin, you can innovate in one direction, which is programmability, and that's Ethereum, Solana, and so on. You can innovate in the other direction, which is privacy, and that's Zcash. To get to private programmability, you're actually stacking those two together, and it's actually quite hard. It opens up all these attack surfaces and so on. So just scale Zcash first. And then, you know, there's Aztec, there's Aleo, there's all these other private smart contract chains. I wish them the best. I want to have a non-zero-sum view of the world. They're taking on a more complicated problem.

In theory, they can just do the same thing Zcash is doing, which is private transactions. In practice, if you remember Facebook in the 2000s, people said: why does Twitter exist? Facebook has the status update. Like, one feature of Facebook is all of Twitter. Why does Twitter exist? Sometimes that's a good argument, by the way. That's why, you know, Steve Jobs told Drew Houston Dropbox is just a feature, right? And Dropbox, it's funny, it's a great company and so on and so forth. But if iCloud was Dropbox, it'd probably be better; both would be better off. iCloud is kind of eh, and Dropbox doesn't have as much distribution as if it was part of a big operating-system bundle. So sometimes people are half right, half wrong. Dropbox is a great company, but it might have been bigger in terms of percentage value if they had been Apple's cloud services, basically, right? But okay, the point is, it's hard to say whether something is just a product or a feature, but my strong intuition is that, just like Twitter, its simplicity made it its own thing, right? Simple, scalable, billion-person private digital cash has been the dream for 30 years, and we're finally there.

So zodal.com: install it. By the way, I'm not a trader. I just don't care about trading. I'm early on platforms and infrastructure. There are things you have to not care about in order to care about things. So there are very, very, very few things I talk about. Also, Zcash has been around for 10 years. Like, you know, even the toxic waste setup ceremony, that's gone; that got fixed cryptographically. So it's unusual: it's been around 10 years, it's got a security track record, it's got a decentralized base of holders. The cryptography works.

Love it. That's a great place to wrap a wide-ranging conversation on what's happening in AI and crypto. As always, Balaji, fantastic conversation. Until next time.

Yes. And oh, by the way, if you're in Singapore, Malaysia, or anywhere, come visit ns.com and Network School. We're scaling, and we'll talk about that too next time, maybe.

Yeah, love to see all the progress there. Amazing what you guys are doing. Excited to be involved in a small way. And yeah, until next time. Okay, thank you.
