On Artificial Intelligence
By Naval
Summary
## Key takeaways

- **Vibe Coding Replaces Product Management**: Vibe coding is the new product management, and training and tuning models is the new coding: non-coders can describe apps in English and have an AI like Claude build entire working applications end-to-end without writing code. [02:21], [03:12]
- **App Market Tsunami Fills Niches**: Expect a tsunami of applications from vibe coders, filling infinite niches that were too small for traditional engineers, while the best apps dominate their categories, aggregators capture the super wealth, and medium-sized firms get blown apart. [04:02], [06:35]
- **AI Training Is the New Programming**: Training AI models on giant datasets is programming by searching for programs inside tuned structures, unlike classic precise coding; this new art excels at fuzzy, creative tasks where traditional computers fail. [07:30], [09:56]
- **Software Engineers Stay Leveraged**: Software engineers think in code, fixing leaky abstractions and handling edge cases outside AI's data distribution, like high-performance or novel architectures, which lets them run circles around vibe coders. [10:33], [13:08]
- **Entrepreneurs Have Extreme Agency**: No entrepreneur worries about AI taking their job, because entrepreneurship isn't a job but extreme agency in unknown domains; AI is an ally for tackling impossible problems, and it lacks its own authentic desires. [23:12], [25:22]
- **True Intelligence Gets What You Want**: The only true test of intelligence is getting what you want out of life; AI fails this because it has no desires, and in adversarial zero-sum games like trading or seduction, human creativity provides the edge. [34:19], [36:34]
Topics Covered
- Vibe Coding Replaces Product Management
- AI Training Supersedes Traditional Coding
- Engineers Dominate Vibe Coders
- Entrepreneurs Wield Extreme Agency
- AI Levels Playing Field for Creators
Full Transcript
Hey, this is Nivi. You're listening to the Naval podcast. For the first time in recorded history, we are not at the same location. I am actually walking around town, and Naval might be doing the same. So, there might be some ambient noise, but we are going to try hard to remove that with AI and some good audio engineering. Podcast recording is so stilted because you have to sit down, and you schedule something, and there's this giant mic pointing in your face, and it's not casual. It makes it just less authentic, more practiced, more rehearsed. I get that it produces maybe higher quality audio and video, but I feel like it produces lower quality conversation.
>> And we all know brains run better when they're being locomoted and you're moving around or just going for walks.
>> Absolutely. My brain is powered by my legs.
>> I pulled out some tweets from Naval on the topic of AI. We want to talk a little bit about AI, and hopefully talk about it in a more timeless manner, but I think some of it's going to be non-timeless content. Before we jump into the tweets, do you want to say anything about what you're doing with your time, or what you're doing at Impossible?
>> Not really. We're working on a very difficult project. That's why it's called Impossible. We have an amazing team, and it's really exciting building something again. It's very pure starting over from the bottom, and it's always day one. I guess I just wasn't satisfied being an investor, and I certainly don't want to be a philosopher or just a media personality or a commentator, because I think people who just talk too much and don't do anything haven't encountered reality. They haven't gotten the harsh feedback from free markets or from physics or nature. And so after a while it ends up becoming just too much armchair philosophy. You'll probably have noticed my recent tweets have been much more practical and pragmatic. There are still occasional ethereal or generic ones, but it's more grounded in the reality of working every day. And I just like working with a great team to create something that I want to see exist. So hopefully we'll create something that will come to fruition and people will say, "Wow, that's great. I want that also." Or maybe not. But it's in the doing that you learn.
>> So I pulled out a tweet from a couple days ago, February 3rd: "Vibe coding is the new product management. Training and tuning models is the new coding."
>> There's been a shift, a marked one, in the last year and especially in the last few months, most pronounced by Claude Code, which is a specific model that has a coding engine in it, which is so good that I think now you have vibe coders: people who didn't really code much, or hadn't coded in a long time, who are using essentially English as a programming language, as an input into this codebot, which can do end-to-end coding instead of just helping you debug things. You can describe an application that you want. You can have it lay out a plan. You can have it interview you for the plan. You can give it feedback along the way. And then it'll chunk it up and it'll build all the scaffolding. It'll download all the libraries and all the connectors and all the hooks. And it'll start building your app, building test harnesses, and testing it. And you can keep giving it feedback and debugging it by voice, saying this doesn't work, that works, change this, change that, and have it build you an entire working application without your having written a single line of code.
For a large group of people who either don't code anymore or never did, this is mind-blowing. This is taking them from idea space and opinion space and from taste directly into product. So vibe coding is the new product management: instead of trying to manage a product or a bunch of engineers by telling them what to do, you're now telling the computer what to do, and the computer is tireless. The computer is egoless, and it'll just keep working. It'll take feedback without getting offended. You can spin up multiple instances. It'll work 24/7, and you can have it produce working output. What does that mean? Just like now anybody can make a video and anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don't have one already in the app store, but it doesn't even begin to compare to what we're going to see. However, when you start drowning in these applications, does that necessarily mean that these are all going to get used?
No. I think it's going to break into two kinds of things. First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there's no demand for average. Nobody wants the average thing. People want the best thing that does the job. So, first of all, you just have more shots on goal, so there will be more of the best. There will be a lot more niches getting filled. You might have wanted an application for a very specific thing, like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn't large enough to justify the cost of an engineer coding away for a year or two, but now the best vibe-coded app might be enough to scratch that itch or fill that slot. So a lot more niches will get filled, and as that happens, the tide will rise. As for the best applications, those engineers themselves are going to be much more leveraged. They'll be able to add more features, fix more bugs, smooth out more of the edges. So the best applications will continue to get better. A lot more niches will get filled. And even individual niches, such as an app that's just for your own very specific health tracking needs, or for your own very specific architecture, layout, or design: that app that could never have existed will now exist.
We should expect, just like on the internet, what's happened with Amazon, where you replaced a bunch of bookstores with one super bookstore and a zillion long-tail sellers, or YouTube replaced a bunch of medium-sized TV stations and broadcast networks with one giant aggregator called YouTube, or maybe a second one called Netflix, and then a whole long tail of content producers. In the same way, the app store model will become even more extreme, where you will have one or two giant app stores helping you filter through all of the AI slop apps out there, and then at the very head there'll be a few huge apps that will become even bigger, because now they can address a lot more use cases or just be a lot more polished, and then there'll be a long tail of tiny little apps filling every niche imaginable. As the internet reminds us, the real power and wealth, the super wealth, goes to the aggregator. But there's also a huge distribution of resources into the long tail. It's the medium-sized firms that get blown apart: the 5, 10, 20 person software companies that were filling a niche for an enterprise use case that can now be either vibe-coded away, or the lead app in the space can now encompass that use case.
So if anyone can code, then what is coding? Coding still exists in a couple of areas. The most obvious place that coding exists is in training these models themselves. There are many different kinds of models. There are new ones coming out every day. There are different ones for different domains.
We're going to see different models for biology, for programming. We're going to see pointed, focused models for sensors. We're going to see models for CAD, for design. We're going to see models for 3D and graphics and games. Models for video. You'll see many different kinds of models. The people who are creating these models are essentially programming them. But they're programmed in a very different way than classic computers.
Classic computing is: you have to specify in great detail every step, every action the computer is going to take. You have to formally reason about every piece and write it in a highly structured language that allows you to express yourself extremely precisely. The computer can only do what you tell it to do. And then once you've got this very structured program, you run data through it, and the computer runs the data and gives you an output. It's basically an incredibly fancy, very complicated, meticulously programmed calculator. Now, when it comes to AI, you're doing something very different, but you are nevertheless programming it. What you're doing is taking giant data sets that have been produced by humanity, thanks to the internet, or aggregated in other ways, and you're pouring those data sets into a structure that you've defined and tuned. And that structure tries to find a program that can produce more of that data set, or manipulate that data set, or create things off that data set. So you're searching for a program inside this construct that you've designed. You've set up a model. You've tuned the number of parameters. You've tuned the learning rate. You've tuned the batch size. You've tokenized the data that's coming in. You've broken it into pieces, and you're pouring it inside the system you've designed, almost like a giant pachinko machine. And now the system is trying to find a program, and it could find many different programs. So your tuning really influences how good the program that you found is. And that program can
now suddenly be expressive in different kinds of domains. So it can do things that computers before were traditionally very bad at. Traditional computers are very good when you program them to give you precise output: specific answers to specific questions, things you can rely on and repeat over and over again. But sometimes you're operating in the real world and you're okay with fuzzy answers. You're even okay with wrong answers. For example, in creative writing, what's the wrong answer? If you're writing a piece of poetry or fiction, what's the wrong answer? If you're searching on the web, there are many right answers. There are many details of the right answers, but they're not all quite perfectly right.
And real life sort of works that way.
There are variations of right answers or mostly right answers. When you're
drawing a picture of a cat, there are many different cats you could draw.
There are many different levels of detail. There are many different styles you could use. When these semi-wrong or fuzzy answers are acceptable, then these discovered programs through AI are much more interesting and much more adapted to the problem than ones that you coded up from scratch, where you had to be super precise. Fundamentally, what we're doing is a new kind of programming, and this is the forefront of programming. This is now the art of programming. These people are the new programmers.
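As a concrete sketch of that idea, here is a toy version of "searching for a program inside a structure you've defined and tuned." Everything here, the data, the two-parameter model, and the hyperparameter values, is invented for illustration; real model training differs in scale, not in kind:

```python
import random

random.seed(42)

# A "data set" produced by some unknown process: here, y = 2x + 1.
data = [(i / 100, 2 * (i / 100) + 1) for i in range(100)]

# The structure we've defined: a two-parameter model w*x + b.
# The structure we've tuned: hyperparameters chosen by hand, not learned.
w, b = 0.0, 0.0
learning_rate = 0.1
batch_size = 10
epochs = 500

for _ in range(epochs):
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # Gradient of mean squared error over this batch.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
        grad_b = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

# The search has "found the program" y = 2x + 1 without anyone writing it.
print(w, b)
```

Nobody wrote the rule relating x to y; the tuned structure recovered it from data. Change the learning rate or batch size and the search finds the same program faster, slower, or not at all, which is the sense in which tuning is the new programming.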
And that's why you can see AI researchers getting paid gargantuan amounts, because they've essentially taken over programming. Does this mean that traditional software engineering is dead? Absolutely not. Software engineers, even the ones who are not necessarily tuning or training AI models, are now among the most leveraged people on Earth. Sure, the people who are training and tuning models are even more leveraged, because they're building the tool set that software engineers are using. But software engineers still have two massive advantages over you. First, they think in code. So they actually know what's going on underneath, and all abstractions are leaky. So when you have a computer programming for you, when you have Claude Code or an equivalent programming for you, it's going to make mistakes. It's going to have bugs. It's going to have suboptimal architecture. So it's not going to be quite right. And someone who understands what's going on underneath will be able to plug the leaks as they occur. So if you want to build a well-architected application, if you want to be able to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you're going to want to have a software engineering background. The traditional software engineer is going to be able to use these tools much better. And there are still many kinds of problems in software engineering that are out of scope for these AI programs today. The easiest way to think about those is problems that are outside of their data distribution. For example, if they need to do a binary search or reverse a linked list, they've seen countless examples of that. So, they're
extremely good at it. But when you start getting out of their domain, when you have to write very high-performance code, when you're running on architectures that are novel or brand new, when you're actually creating new things or solving new problems, then you still need to get in there and hand-code it. At least until either there are so many of those examples that new models can be trained on them, or until these models can sufficiently reason at even higher levels of abstraction and crack it on their own. Because given enough data points, there is some evidence that these AIs actually learn. They learn at a higher level of abstraction, because the act of forcing them to compress the data forces them to learn higher-level representations. If I show an AI five circles, it can just memorize exactly what the sizes and the radii and the thicknesses and so on of those circles are. If I show it 50,000 circles or 5 billion circles, and I give it a very small amount of parameter weights, which are its equivalent of neurons, to memorize that, it's going to be much better off figuring out pi and how to draw a circle and what thickness means, and forming an algorithmic representation of that circle rather than memorizing circles.
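That compression argument can be made concrete in a few lines of Python. The numbers below are made up for illustration: thousands of points sampled evenly around one circle collapse into a three-number "algorithmic" representation (center and radius):

```python
import math

cx_true, cy_true, r_true = 1.0, -2.0, 3.0   # the circle hidden in the data
n = 5000

# Memorization would mean storing all 10,000 coordinates.
points = [
    (cx_true + r_true * math.cos(2 * math.pi * k / n),
     cy_true + r_true * math.sin(2 * math.pi * k / n))
    for k in range(n)
]

# Compression: recover just three parameters from the data.
# (The centroid equals the center here because the sampling is uniform.)
cx = sum(x for x, _ in points) / n
cy = sum(y for _, y in points) / n
r = sum(math.hypot(x - cx, y - cy) for x, y in points) / n

print(cx, cy, r)
```

Three floats now regenerate any of the 5,000 points on demand, which is the sense in which forcing compression pushes a model toward the underlying algorithm instead of a lookup table.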
Given all that, these things are learning at an accelerated rate, and you could see them start to cover more of the edge cases I've talked about. But at least as of today, those edge cases are prevalent enough that a good engineer operating at the edge of knowledge of the field is going to be able to run circles around vibe coders. And remember, there is no demand for average. The average app, nobody wants it, at least as long as it's not filling some niche. The app that is better will win essentially 100% of the market. Maybe there's some small percentage that will bleed off to the second-best app, because it does some little niche feature better than the main app, or it's cheaper, or something of the sort. But generally speaking, people only want the best of anything. So the bad news is there's no point in being number two or number three. Like in the famous Glengarry Glen Ross scene where Alec Baldwin says: first place gets a Cadillac Eldorado, second place gets a set of steak knives, and third place, you're fired. That's absolutely true in these winner-take-all markets. That's the bad news. You have to be the best at something if you want to win. However, the set of things you can be best at is infinite. You can always find some niche that is perfect for you, and you can be the best at that thing. This goes back to an old tweet of mine where I said, "Become the best in the world at what you do. Keep redefining what you do until this is true." And I think that still applies in this age of AI.
I think the way to think about these coding models is as another layer in the abstraction stack that programmers have always used since the dawn of computers: the stack that went from the transistor to the computer chip to assembly language to the C programming language to higher-level languages to languages with huge libraries. They built and built that stack so you don't have to look at the layer beneath, unless you need to optimize it or you have a reason to look at the layer beneath. So in this case, these coding models are a massive new layer in the stack that lets product managers and typical non-programmers and programmers write code without writing code.
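Those leaks show up even at the very top of the stack. A classic Python example: floats look like real numbers until the binary representation underneath leaks through, and plugging the leak takes someone who knows what sits below the abstraction:

```python
import math

# The abstraction: floats behave like real numbers.
total = sum([0.1] * 10)
print(total == 1.0)   # False: ten tenths don't make 1.0 in binary floating point

# Plugging the leak means knowing the layer beneath (binary fractions)
# and reaching for a tool built to compensate for it.
print(math.fsum([0.1] * 10) == 1.0)   # True: error-compensated summation
```

The same pattern repeats at every layer of the stack, including the new AI layer: the abstraction works until it doesn't, and then knowledge of the layer beneath is the fix.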
>> I think that's correct in terms of the trend line. However, this is an emergent property. This is not a small improvement. This is a big leap. For example, when I was in school, I was programming mostly in C. And then C++ came along, and it wasn't any easier. It was a little more abstract in some ways, and I never really bothered learning it. And then Python came along, and I was like, "Wow, this is almost like writing in English." I couldn't have been more wrong. English is still pretty far from Python, but it was a lot easier than C. Now you can literally program in English.
And so that brings me to a related point. I don't think it's worth learning tips and tricks of how to work with these AIs. You'll see, for example, on social media right now, there's a lot of writeups and books and tweets like, "Oh, I figured out this neat trick with the bot. You can prompt it this way, or you can set up your harness this way, or there's a new programming assist tool or layer that you can use on top of it to do this or that." And I never bother learning those. I just sit there stupidly talking to the computer, because I know that this thing is now at the stage where it is going to adapt to me faster than I can adapt to it. It is getting smarter and smarter about how people want to use it. So, it is learning. It is being trained, and tools are being built very quickly to make it easier for me to use it. So, I don't need to sit there and figure out some esoteric programming command. And this is what I think Andrej Karpathy meant when he said English is the hottest new programming language. I can just speak English. And for someone like me, who is relatively articulate in English, and also has a structured mind, and knows how computer architectures work, how computer programs work, and how programmers think, I can actually very precisely specify what I want just through structured English. I don't need to go any further than that. The only reason to use these workflows and tool sets, which are very ephemeral (their longevity is measured in weeks, perhaps months at best, not in years), is if you're building an app right now that needs to be at the bleeding edge, and you absolutely need every little bit of advantage you can get because you're in some kind of competitive environment. But otherwise, I wouldn't bother learning how to use an AI. Rather, let the AI learn how to be useful to you.
>> I've never been into prompt engineering, even before AI. I would just put in what people called boomer queries, where you put in the whole question that you want to ask, instead of the keywords you would put into Google if you were more of an analytical thinker. I never spend much time formulating really precise questions or prompts for any kind of AI. I just ramble into it. And I've done that since the beginning of AI. And like you said, AI is adapting to us faster than we are adapting to it.
>> Yeah. Like a lot of smart people, you're very lazy. And I mean that as a compliment. If you find a smart person who's grinding a little too much, you kind of have to wonder how smart they are. And by lazy, I mean that you're optimizing for the right kind of efficiency. You don't care about the efficiency of the computer or the electronics or the electrons running through the circuits. You care about your own human efficiency: the wetware, the biology. That's super expensive. That's why it's silly to see people go to huge lengths to save energy in the environment, when they themselves, as a biological computer that's eating food and pooping and taking up space, are using up far more energy to save tiny bits of energy in the environment. They're inherently downgrading their own importance in the universe, or rather revealing what they think of themselves.
I think as AI evolves, or co-evolves with us, it's evolved by us according to our needs. The pressures on AI are very capitalistic pressures, in the sense that it's a free market for AI. As an AI instance, you only get spun up by a human if you're useful to a human. So there is a natural selection pressure on these AIs to be useful, to be obsequious, to do what we want. And so it will continue to adapt towards us, and I think it will be quite helpful to us. That's not to say that there's no such thing as a malicious AI, but it's malicious because the people who are using it are using it for malicious reasons. And like a dog that's trained to attack, it's actually being trained by its owner to go and carry out the owner's malicious desires. So I don't really worry about unaligned AI. I worry about unaligned humans with AI.
>> So the selection pressure you're saying is for AI to be maximally useful to people.
>> Correct. And so if you find an AI to be very obsequious towards you, for example, it's always saying, "Oh, you're right. Oh, that's such a great idea. Oh my god, you're so smart." That's because that's what most people want. And at least today, these AIs are being trained on massive numbers of users and massive amounts of data, because you're working with one-size-fits-all models. But we're going to quickly move into an era when you can personalize your AI, and it does begin to feel more and more like your personal assistant, and it corresponds more to what you want. Which will, of course, anthropomorphize the AI even more, and you'll be more likely to be convinced, oh, actually this thing is alive, when you've trained it to look the most like a living thing to you.
>> Maybe we already covered this enough, but over a year ago you tweeted that AI won't replace programmers, but rather make it easier for programmers to replace everyone else.
>> Yeah, this is my point earlier, which is that programmers are becoming even more leveraged. So now a programmer with a fleet of AIs is, call it, 5 to 10x more productive than they used to be. And because programmers operate in the intellectual domain, it's a mistake to even say 10x programmers, because there are 100x programmers out there. There are 1,000x programmers out there.
There are programmers who just pick the right thing to work on and create something that's valuable, and others who pick the wrong thing to work on, and their work has zero value in that short time frame. Intelligence is not normally distributed. Leverage is not normally distributed. Programmability is not normally distributed. Judgment is not normally distributed. So the outcomes are going to be supernormal. So what you have to really watch out for is that there are programmers now who are going to come up with ideas that can replace entire industries. They will completely rewrite the way things are done, and their intelligence can be maximally leveraged with all these bots and all these AI agents. I think every other job out there is going to get eaten up by programmers one way or another over the maximally long term. Obviously, it has to instantiate into robots, etc. But the good news is that anybody who is a logical, structured thinker, who thinks like a programmer and can speak any language that an AI can understand, which will be every language, will now be on the playing field. They will be able to make anything they want, constrained only by their creativity, limited only by their imagination. So we are entering an era where every human, in a sense, is a spellcaster. If you think of programmers as wizards who have memorized arcane commands, you can think of AI as a magic wand that's been handed to every person, where now they can just talk in any language they want and they're a wizard, too. So, it is more of a level playing field. I really do think this is a golden age for programming. But yes, the people who have a software engineering mindset, and who understand computer architecture and can deal with leaky abstractions, are going to have an advantage. There's no way around that. They simply have more knowledge in the field that they're operating in. Just like even in classic software engineering, which still exists because you have to write high-performing code: even those people do best when they have an understanding of the hardware underneath. When they understand how the chips operate, how the logic gates operate, how the cache operates, how the processor operates, how the disk drive underneath operates. And then even the people who are in hardware engineering have an advantage if they understand the physics of what's going on. They understand where the abstractions that hardware engineers deal with leak down into the physical layer, and maybe physicists become philosophers at some point. You can take this all the way down, but it always helps to have knowledge one layer below, because you're getting closer to reality.
>> Another tweet from a year ago which is arguing perhaps the complement of what we just talked about is from February 9, 2025.
No entrepreneur is worried about an AI taking their job. That one's glib in multiple ways. First of all, being an
multiple ways. First of all, being an entrepreneur isn't a job. It's literally
the opposite of a job. And in the long run, everyone's an entrepreneur. Careers
got destroyed first, jobs get destroyed second, but all of it gets replaced by people doing what they want and doing something that creates something useful that other people want. So, no
entrepreneur is worried about an AI taking their job because entrepreneurs are trying to do impossible things.
They're trying to do very difficult things. Any AI that shows up is their
things. Any AI that shows up is their ally and can help them tackle this really hard problem. They don't even have a job to steal. They have a product to build. They have a market to serve.
to build. They have a market to serve.
They have a customer to support. They
have a creativity to realize. They have
a thing that they want to instantiate in the world. And they want to build a
the world. And they want to build a repeatable and scalable process around getting it out into the world. This is
so difficult that any AI that shows up that can do any of that work is their ally. If the AIs themselves are
ally. If the AIs themselves are entrepreneurs, they're likely going to just be entrepreneurs serving other AIs or they're under the control of an entrepreneur. The thing that the AI
entrepreneur. The thing that the AI itself is missing at the end of the day is its own creative agency. It's missing
its own desires and they have to be authentic, genuine desires. Unless you
can pull the plug on an AI and turn it off, and unless it lives in mortal fear of being turned off, and unless it can actually take its own actions for its own reasons, for its own instincts, its
own emotions, its own survival, its own replication, it's not quite alive. And
even then people will challenge is it alive? Because consciousness is one of
those things as a qualia. It's like a color. It's like if you say red, I don't
know if you're actually seeing red. You
might be seeing what I see as green and I might be seeing what you see as red.
But we'll never know, because we can't get into each other's minds. In the same way, even with an AI that's completely imitating everything that humans do, to
some people it'll always be an imitation machine and to others it'll be conscious. But there will be no way of
distinguishing the two. We're still
pretty far from that though. Right now,
the AIs are not embodied. They don't have agency. They don't have their own desires. They don't have their own survival instinct. They don't have their own replication. Therefore, they don't have their own agency. And because they don't have their own agency, they cannot do the entrepreneur's job. In fact, I would summarize this by saying the key
thing that distinguishes entrepreneurs from everybody else right now in the economy is entrepreneurs have extreme agency. That's why it's diametrically
opposed to the idea of a job. A job
implies that you're working for somebody else or filling a slot, but entrepreneurs are operating in an unknown domain with extreme agency. There are other examples of roles like this in society.
An explorer also does the same thing, right? If you're landing on Mars or
you're sailing a ship to an unknown land, you're also exercising extreme agency to solve an unsolved problem. A
scientist exploring an unknown domain does this. A true artist is trying to
create something that does not exist and has never existed, yet somehow fits into the set of things that can explain human
nature, allows them to express themselves, and creates something new. So in all of these roles, whether you're a scientist, a true artist, or an entrepreneur, what
you're trying to do is so difficult and is so self-directed that anything like an AI that can help you is a welcome ally. You're not doing it because it's a
ally. You're not doing it because it's a job. You're not trying to fill a slot
job. You're not trying to fill a slot that somebody else can show up and fill.
In fact, if the AI can create your artwork or if the AI can crack your scientific theory or if the AI can create the object or the product that you're trying to make, then all it does
is it levels you up. Now, it's the AI plus you. The AI is a springboard from
which you can jump to a further height.
We're going to see some incredible art created that's AI assisted. We will see movies that we couldn't have imagined created by people using AI tools.
There's an analogy here in art that's interesting. For a long time in art, the
rough direction was trying to paint things that were more and more realistic. Paint the human body, paint
the fruit, paint proper lighting, etc. Eventually, photography came along, and then you could replicate things very precisely, and so that selection
pressure went away and then art got weird. Art went in many different
directions. Art became all about: can I be surreal, can I create something that expresses me? A lot of art schools spun out of that that got really weird, including modern art and postmodernism. But also, I would argue, some of the greatest creativity came at that time. We were freed up. Photography got democratized, but photography itself became a form of art, and there were great photographers taking many different kinds of photographs. And now everyone's a photographer. There are still artists who are photographers, but it's not the pure domain of just a few people. In the same way, because AI makes
it so easy to create the basic thing.
Everybody will create the basic thing.
It'll have value to them individually. A
few will still stand out that will create variations of it that are good for everyone. And it would be very hard
to argue that society is worse off because of photography. Although it may have certainly felt like that to some of the artists who were maybe making a living painting portraits of people and
got displaced. Similar things will
happen with AI, where there are people who are making a very specific living doing very specific jobs that the AI can do, and they will get displaced. But in
exchange everyone in society will have the AI. You'll have incredible things
that were created with AI that couldn't have been created otherwise. And within
a few decades it'll be unimaginable that you could roll back the clock and get rid of AI or any kind of software, any kind of technology for that matter just to keep a few jobs that were obsolete.
The goal here is not to have a job. The
goal is not to have to get up at 9:00 in the morning and come back at 7 p.m.
exhausted doing soulless work for somebody else. The goal is to have your
material needs solvable by robots, to have your intellectual capabilities leveraged through computers, and for
anybody to be able to create. I used to do this thought exercise, which I think I talked about in a podcast that you and I did literally 10 years ago: imagine if everybody were a software
engineer or everybody was a hardware engineer and they could have robots and they could write code. Imagine the world of abundance we would live in. Actually
that world is now becoming real. Thanks
to AI everybody can be a software engineer. In fact if you think you can't
be, you can go fire up Claude right now or any of your favorite chatbots and you can go start talking to it. You'd be
amazed how quickly you could build an app. It'll blow your mind. And once we
can instantiate AI through robotics, which is a hard problem. I'm not saying we're that close to having solved it yet, but once we have robots, everyone can also do a little bit of hardware
engineering. And so, I think we're
getting closer and closer to that vision. I don't think AI as it is
currently conceived is alive in any way.
But I do think that we will pretty soon have robots that seem very much like they are alive, for two reasons. One, a lot of human activity is non-creative and non-intelligent, and the robots will be able to replicate that. And two, I do believe that the neural nets that we
have and the models that we have are more than just the training data because the training process transforms that
training data into something novel, and there are new ideas embedded in the neural net that can be elicited through prompting.
>> I don't think these things are alive. I
think they start out as extremely good imitators, to the point where they're almost indistinguishable from the real thing, especially for anything that humanity has already done before en masse. So if the task has been done
before, then it's going to be automated and it'll be done again. It may just be novel to you because you've never seen it, but the AI has learned it from somewhere else. That's the first way in
which it seems alive. The second way, which we talked about earlier, is where it does learn higher levels of abstraction. These are very efficient
compressors. They take huge amounts of
data and then they compress it down further, and in the process of compressing it they learn higher-level abstractions. And then, in specific areas where they may not have learned those through the data themselves, they're getting patched through human feedback.
They're getting patched through tool use. They're getting patched from
traditional programming becoming embedded inside. And especially the AIs
that are learning how to think and code.
They have the entire library of all of human code ever written to fall back on for algorithmic reasoning. In that
sense, the set of things that they can do is getting broader and broader.
However, what they still lack is a lot of core human skills, like single-shot learning. Humans can learn from just one
example. The raw creativity of human
beings, where they can connect anything to anything. They can leap across entire huge domains and search spaces and figure out an idea that just came out of left field. This happens a lot with the
true great scientific theories. Humans
also are embodied. They operate in the real world. They're not operating in the
compressed domain of language. They're
operating in physics, in nature. Language
only encompasses things that humans both figured out and could articulate and convey to each other. That's a very narrow subset of reality. Reality is
much broader than that. So overall, I think even though AIs are going to do things that are very impressive, and they're going to do a lot of things better than humans, just like calculators are faster than any mathematician at calculations, classical
computers are better at running classical computer programs than any human could in their own head, and just like a robot can lift very heavy things or a plane can outfly any bird. So in that
sense, like all machines, the AIs are going to be much better than humans at a whole variety of tasks. But at other tasks, they're going to seem just completely incompetent. Those are the
things that really embody and connect us into the real world, plus this poorly defined but magic creative ability that we seem to have.
>> Speaking of calculators, people talk about super intelligence. I
think super intelligence is already here and has been for a long time. An
ordinary calculator can do things that no human can do. But if you're thinking about super intelligence in the sense of AI will be able to do things and come up
with ideas that humans cannot understand, I don't think that is going to happen because I don't believe that there are ideas that humans can't understand simply because humans can
always ask questions about the idea.
Yeah, humans are universal explainers.
Anything that is possible with the current laws of physics as we know them, the human can model in their own heads.
Therefore, just by enough digging, enough questions, we could figure anything out. Related to that, we should
discuss AI as a learning tool, because I think the other place where it's incredibly powerful is as the most patient tutor that can meet you at your level
and explain anything to your satisfaction 100 different ways, 100 different times until you finally get it. I don't think the AIs are going to
be figuring things out that humans cannot understand. But intelligence is
poorly defined. What is a definition of intelligence? There's the G factor, which predicts a lot of human outcomes. But
the best evidence for the G factor is its predictive power: you measure this one thing and you see people get much better life outcomes along the way, in things that seem even somewhat unrelated to G. So I would
argue, and I think this is one of my more popular tweets, the only true test of intelligence is if you get what you want out of life. This triggers a lot of people because they go to school, they
get their master's degrees, they think they're super smart, and then they don't have great lives. They aren't super happy or they have relationship problems or they don't make the money that they want or they become unhealthy. And this
sort of triggers them. But that really is the purpose of intelligence: for you as a biological creature to get what you want out of life, whether it's a good relationship or a mate or money or
success or wealth or health or whatever it is. So there are people who I think
are quite intelligent because you can tell they have high-quality, functioning lives and minds and bodies and they've just managed to navigate themselves into that situation. It doesn't matter what
your starting point is, because the world is so large now and you can navigate in so many different ways that every little choice you make compounds and demonstrates your ability to understand
how the world works until you finally get to the place that you want. Now, the
interesting thing about this definition that the only true test of intelligence is if you get what you want out of life, is that an AI fails it instantly because an AI doesn't want anything out of life.
The AI doesn't even have a life, let alone that, but it doesn't want anything. AI's desires are programmed by
the human controlling it. But let's give it that for a second. Let's say the human wants something and programs the AI to go get it. Then the AI is acting
as a proxy for the human, and the intelligence of the AI can be measured as: did it get that person that thing?
Most of the things that we want in life are adversarial or zero-sum games. So
for example, if you want to seduce a girl or get a husband, you're competing with all the other people who are out there seducing girls or trying to get husbands. So now you're in a competitive
situation. The AI has to outmaneuver the
other people. Or if you say, "Hey AI, go
trade on the stock market for me and make me a bunch of money." That AI is trading against other humans and other trading bots. It's in an adversarial
situation. It has to outmaneuver them.
Or if you say, "Hey AI, make me famous.
Write me incredible tweets. Write me
great blog posts. Record great podcasts in my own voice and make me famous." Now it's competing against all
the other AIs. So in that sense, intelligence is measured in a battlefield arena. It's a relative
construct. I think the AIs are actually
going to fail mostly in those regards, or to the extent that they even succeed, because they are freely available, they will get outcompeted away, and the alpha
that will remain would be entirely human. As a thought exercise imagine
that every guy had a little earpiece where an AI was whispering to him, a Cyrano de Bergerac kind of earpiece, telling him what to say on the date.
Well, then every woman would have an earpiece telling her to ignore what he said, or which part was AI-generated and which part was real. If you have a trading bot out there, it's going to be nullified or
canceled out by every other trading bot until all the remaining gain will go to the person with the human edge with the increased creativity. Now, that's not to
say that the technology is completely evenly distributed. Most people still
aren't using AI, or aren't using it properly, or aren't using it all the way to the max, or it's not available in all domains or all contexts, or they're not using the latest models. So you can always
have an edge, like early adopters of technology always do, if you adopt the latest technology first. This is why I always say to invest in the future. You
want to live in the future. You want to actually be an avid consumer of technology because it's going to give you the best insight on how to use it and it will give you an edge against the
people who are slower adopters or laggards. Most people hate technology.
They're scared of it. It's intimidating.
You press the wrong button, the computer crashes, you lose your data. You do the wrong thing, you look like an idiot.
Most people do not have a positive relationship with complex technology.
Simple technology, embedded technology, they're fine with. You flip a light switch, the light turns on. That used to be technology. It's so simple now you don't
think of it as technology anymore. You
get in a car, you turn the steering wheel left. To a caveman, that would be a miracle: the car turns left. It's no
longer technology to you. But computer
technology in particular has had very complex interfaces and been very inaccessible and very intimidating to people in the past. Now with the AIs, we're getting the chatbot interface,
which is you just talk to it, you type to it. And one of the great things about
these foundational models, what truly makes them foundational, is you can ask them anything and they'll always give you a plausible answer. It's not going
to say, "Oh, sorry, I don't do math, or I don't do poetry, or I don't understand what you're talking about, or I can't give relationship advice," or anything like that. Its domain is everything that
people have ever talked about. In that
sense, it's less intimidating. It can also be more intimidating because we've anthropomorphized it so much. If you
think Claude or Chat GPT is a real person, then it can be a little scary.
Am I talking to God? This guy seems to know so much. He knows everything. He's
got an opinion on everything. He's got
every piece of data. Oh my god, I'm useless. Let me start talking to it and asking it what to do. And you can reverse the relationship and fool yourself very quickly. That can be intimidating.
Overall, I think these AIs are going to help a lot of people get over the tech fear. But if you're an early adopter of
these tools, like with any other tool, but even more so with these, you just have a huge edge on everybody else. I
remember early on when Google first came out, I used to use it a lot in my social circle. People would ask me basic
questions and I would just go Google it for them and look like a genius.
Eventually this hilarious website came along, something like lmgtfy.com, and it stood for "let me Google that for you." Someone would ask you a question.
You would go type the question into this website and it would create like a tiny little inline video showing you typing that question into Google and giving the Google results. And I feel like AI is in
a similar domain right now, where I will sit around in a social context and people will be debating some point that can be easily looked up by AI. Now you
do have to be very careful with AI. They
do hallucinate. They do have biases in how they're trained. Most of them are extremely politically correct and taught not to take sides, or to only take a particular side. I actually run most of
my queries, almost all actually, through four AIs and I'll always fact-check them against each other. And even then, I have my own sense of when they're
bullshitting or when they're saying something politically correct. And
I'll ask for the underlying data or the underlying evidence. And in some cases, I'll end up dismissing it outright because I know the pressures that the people who trained it were under and what the training sets were.
However, overall, it is a great tool to just get ahead. And in domains that are technical, scientific, or mathematical, that don't have a political context to them, the AI is very much likely to
give you closer to a correct answer. And
in those domains, they are absolute beasts for learning. I will now have AI routinely generate graphs, figures, charts, diagrams, analogies, and illustrations for me. I'll go through them in detail. Then I'll say, "Wait, I
don't understand that question." I can ask it super basic questions and I can really make sure that I understand the thing I'm trying to understand at its simplest, most fundamental level. I just
want to establish a great foundation on the basics. And I don't care about the
overly complicated, jargon-heavy stuff.
I can always look that up later. But now
for the first time, nothing is beyond me. Any math textbook, any physics
textbook, any difficult concept, any scientific principle, any paper that just came out, I can have the AI break it down and then break it down again and
illustrate it and analogize it until I get the gist and I understand it at the level that I want. So these are incredible tools for self-directed learning. The means of learning are
abundant. It's the desire to learn that's scarce. But the means of learning have just gotten even more abundant. And more
importantly than more abundant (because we had abundance before), it's at the right level. AI can meet you at exactly
the level that you are at. So if you have an eighth-grade vocabulary but you have fifth-grade mathematics, it can talk to you at exactly that level. You
will not feel like a dummy. You just
have to tune it a little bit until it's presenting you the concepts at the exact edge of your knowledge. So rather than feeling stupid because it's incomprehensible, which happens in a lot of lessons and a lot of textbooks and
with a lot of teachers, or feeling bored because it's too obvious, which also happens, it can meet you exactly where you're like, "Oh, yeah, I
understood A and I understood B, but I never understood how A and B were connected together. Now I can see how
they're connected. So now I can go to
the next piece." That kind of learning is magical. You can have that aha moment
where two things come together, over and over again.
>> Speaking about autodidacticism, a few years ago I tried to have the AI teach me about the ordinal numbers. It
wasn't that great, but with GPT 5.2 thinking, I had it teach me the ordinal numbers and it was basically error-free.
I only use thinking now even for the most basic queries because I want to have the correct answer. I never let it run auto or fast.
>> Yeah, I'm always using the most advanced model available to me and I pay for all of them.
>> But I don't mind waiting a minute to get an answer for any question, including what temperature should my fridge be at.
>> I agree with that. And I think that's part of what creates the runaway scale economies with these AI models. You'll
pay for intelligence. The model that's right 92% of the time is worth almost infinitely more than the one that's right 88% of the time, because mistakes in the real world are so costly that a couple of bucks extra to get the right
answer is worth it. I'll write my query into one model, then I'll copy it and fire it off into four models at once, and then I'll let them all run in the background. Usually I don't even check
for the answer right away. I'll come
back to the answer a little later and then look at it and then whichever model had the best answer I'll start drilling down with that one. In some rare cases where I'm not sure, I'll have them cross-examine each other. A lot of cut
and pasting there. And in many cases, I'll then ask follow-up questions where I'll have it draw diagrams and illustrations for me. I find it's very easy to absorb concepts when they're presented to me visually. I'm a very
visual thinker. So, I will have it do
sketches and diagrams and art, almost like whiteboard sessions. Then I can really understand what it's talking about.
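The fan-out habit described here is easy to script. A minimal sketch, where `ask()` is a placeholder stub standing in for whatever real provider API calls you use, and the model names are hypothetical:

```python
# Fan one prompt out to several models at once, as described above.
# ask() is a stub standing in for real provider clients; the model
# names are placeholders, not real endpoints.
from concurrent.futures import ThreadPoolExecutor

MODELS = ["model-a", "model-b", "model-c", "model-d"]  # hypothetical names

def ask(model: str, prompt: str) -> str:
    # Replace this stub with a real API call for each provider.
    return f"[{model}] answer to: {prompt}"

def fan_out(prompt: str) -> dict[str, str]:
    # Fire the same prompt at every model concurrently, collect the
    # answers, and return them keyed by model for later comparison.
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {m: pool.submit(ask, m, prompt) for m in MODELS}
        return {m: f.result() for m, f in futures.items()}

answers = fan_out("What temperature should my fridge be at?")
for model, answer in answers.items():
    print(model, "->", answer)
```

From there, cross-examining is just feeding one model's answer back into another model's prompt.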
Let's talk about the epistemology of AI, because I think the next big misconception is that AI is already starting
to solve some unsolved basic math problems, ones that a human probably could solve if they cared to but that haven't been solved yet, like Erdős problem number whatever. Now, I
think people are taking that or will take that as an indicator that the AI is creative. I don't think it's an
indication that the AI is creative. I
actually think the solution to the problem is already embedded somewhere in the AI. It just needs to be elicited by
prompting. There's definitely that
element to it. And then the question is, what is creativity? It's such a poorly defined thing. If you can't define it,
you can't program it, and often you can't even recognize it. So this is where we get into taste or judgment. I
would say that the AIs today don't seem to demonstrate the kind of creativity that humans can uniquely engage in once in a while. And I don't mean like fine art. People tend to confuse creativity
with fine art. They're like, "Oh, paintings are creative and AIs can paint." Well, AI can't create a new
genre of painting. AIs can't move humans with emotion in a way that is truly novel. So in that sense, I don't think
AI is creative. I don't think AI is coming up with what I would call out-of-distribution ideas. Now, the answer to the
Erdős problems that you mentioned may have been embedded within the AI's training data set, or even within its algorithmic scope, but it was probably
embedded in five different places, in three different ways, in two different languages, in seven different computing and mathematical paradigms, and the AI sort of put them all together. Now, is
that creativity? Steve Jobs famously said, "Creativity is just putting things together." I actually don't think that's
correct. I think creativity is much more in the domain of coming up with an answer that was not predictable or foreseeable from the question and from
the elements that were already known. It
was very far out of the bounds of thinking. If you were just searching it
with a computer, or even with an AI, and making guesses, you'd be making guesses till the end of time until you arrived upon that answer. So that's the real
creativity that we're talking about. But
admittedly, that's a creativity that very few humans engage in and they don't engage in it most of the time. It
becomes harder and harder to see. So, we
are probably going to get to where if you have a giant list of math problems to be solved and AI starts going through and picking, okay, this one out of that set of 1 million I can solve and this set out of 300,000 I can solve and I
need a person to prompt me and ask the right questions. That's a very limited
form of creativity. There's another form of creativity where it starts inventing entirely new scientific theories that then turn out to be true. I don't think we're anywhere near that, but I could be
wrong. The AIs have been very surprising. So I don't want to get too
much in the business of making prophecies and predictions. But I don't think that just throwing more compute at the current AI models, short of some breakthrough invention, is going to get
us there. Just to be clear, when I say
it's embedded, I don't mean the answer's already written down in there. I just
mean that it can be produced through a mechanistic process of turning the crank, which is all that today's computer programs are, where the output is completely
determined by the input. Epistemology
now gets us into philosophy because isn't that just what human brains are doing? Aren't firing neurons just
electricity and weights propagating through the system, altering states? It's a mechanistic process: if you turn the crank on the human brain, you would end up with the same answer. And some
people, like Penrose I think, are out there saying no, human brains are unique because of quantum effects in microtubules. You
could argue that some of this computation is taking place at the physical cellular level, not the neuron level, and that's way more sophisticated than anything we can do with computers today, including with AI. Or you could just argue no, we just don't have the right
program. It is mechanistic. There is a
crank to turn, but we're not running the correct program. The way these AIs run
today is just a completely wrong architecture and wrong program. I just
buy more into the theory that there are some things they can do incredibly well and there's some things they do very poorly. And that's been true for all
machines and all automation since the beginning of time. The wheel is much better than the foot at going in a straight line at high speeds and
traveling on roads. The wheel is really bad for climbing a mountain. The same
way I think these AIs are incredibly good at certain things and they're going to outperform humans. They're incredible
tools. And then there are other places where they're just going to fall flat.
Steve Jobs famously said that a computer is a bicycle for the mind. It lets you travel much faster than walking.
Certainly in terms of efficiency, but it takes the legs to turn the pedals in the first place. And so now maybe we have a
motorcycle for the mind, to stretch the analogy. But you still need someone to
ride it, to drive it, to direct it, to hit the accelerator, and to hit the brake. We should probably find something
to wrap things up on.
>> When new paradigms and new tool sets come out, there is a moment of enthusiasm and change. And this is true in society. And this is true as an
individual. If you ride the moment of
enthusiasm in society, that's exciting.
And you can learn new things, you can make friends, and you can make money.
But there's also a moment of enthusiasm in an individual. When you first encounter AI and you're curious about it and you're genuinely open-minded about it, I think that's the time to lean in and
learn about the thing itself, not just to use it, which of course everyone will, but to actually learn how it works. I think diving into and looking
works. I think diving into and looking underneath the hood is really interesting. If you encounter a car for
interesting. If you encounter a car for the first time in your life, yes, you can get in and drive it around, but that's the moment you're also going to be curious enough to open up the hood and look how it's structured and
designed and figure it out. I would
encourage people who are fascinated by the new technology to really get into the inards and figure it out. You don't
have to figure out to the level where you can build it or repair it or create your own but to your own satisfaction because understanding what's underneath the abstraction, what's underneath that
command line, it's going to do two things. One is it'll let you use it a
things. One is it'll let you use it a lot better and when you're talking about the tool that has so much leverage, using it better is very helpful. Second
is it'll also help you understand whether you should be scared of it or not. Is this thing really gonna
not. Is this thing really gonna metastasize into a Skynet and destroy the world? Are we going to be sitting
the world? Are we going to be sitting here and Arnold Schwarzenegger shows up and says at 4:29 a.m. and February 24th is when Skynet became self-aware, right?
Or is it more that hey this is a really cool machine and I can use to do A B and C but I can't use to do D E and F and this is where I should trust it and this is where I should be suspicious of it. I
feel like a lot of people right now have AI anxiety and the anxiety comes from not knowing what the thing is or how it works having a very poor understanding.
And so the solution to that anxiety is action. The solution to anxiety is
action. The solution to anxiety is always action. Anxiety is a non-specific
always action. Anxiety is a non-specific fear that things are going to go poorly and your brain and body are telling you to do something about it, but you're not sure what. You should lean into it. You
sure what. You should lean into it. You
should figure the thing out. You should
look at what it is. You should see how it works. And I think that'll help get
it works. And I think that'll help get rid of the anxiety. That action of learning, that pursuit of curiosity is going to help you get over the anxiety.
And who knows, it might actually help you figure out something you want to do with it that is very productive and will make you happier and more successful.