Jensen Huang's Latest In-Depth Interview: The Era of AI Agents Is Coming...
By New SciTech 新科技
Summary
## Key takeaways

- **Agentic AI drives a 10,000x compute surge in 2 years**: Jensen Huang states that moving from generative to reasoning required about 100x more computation, and moving from reasoning to agentic required another 100x, meaning that in just two years computation went up by a factor of 10,000x. This exponential growth in compute demand underpins the AI infrastructure buildout. [22:26], [22:49]
- **OpenClaw defines a new personal AI computer paradigm**: Jensen describes OpenClaw as fundamentally defining a computer through four elements: a memory system (scratch), skills, resource management, and scheduling. He calls it the "personal artificial intelligence computer for the very first time" and "the blueprint, the operating system of modern computing." [15:13], [15:45]
- **A $50B factory can produce the cheapest tokens via 10x throughput**: Jensen argues that a $50 billion factory will generate the lowest-cost tokens because it produces tokens at 10x the efficiency. He explains that the difference between a GPU at 1x price versus 0.5x price represents only a small percentage of a $50 billion data center's total cost, which includes power, cooling, storage, and networking anyway. [07:56], [08:43]
- **Deep specialization is the only lasting moat in the AI era**: Jensen advises entrepreneurs that their moat is "deep specialization," knowing their vertical better than anyone else. He predicts every enterprise software company will become a value-added reseller of AI model tokens, and those who connect their agents with customers first will build the strongest flywheel. [57:40], [58:20]
- **Robotics reaching mass deployment in 3-5 years**: Jensen predicts that from a high-functioning existence proof, it takes "a couple two three cycles," essentially three to five years, to reach reasonable products. He notes China is "formidable" due to world-leading microelectronics, motors, and rare earth magnets essential for robotics. [52:53], [53:25]
- **Radiologist prediction proves AI augments rather than replaces**: Jensen recounts how a leading computer scientist predicted computer vision would eliminate radiologists, but 10 years later radiologists have increased in number, because AI made scans faster and cheaper, enabling more scans and expanding the market. He uses this to argue against AI doomerism. [01:03:20], [01:04:16]
Topics Covered
- Expensive Factories Produce the Cheapest Tokens
- OpenClaw Is the New Operating System of Computing
- Compute Demand Has Grown 10,000x in Two Years
- Robots Will Be the Greatest Economic Equalizer Ever
Full Transcript
special episode this week. We've preempted the weekly show, and there's only three people we preempt the show for: President Trump, Jesus, and Jensen.
[laughter] And uh, I'll let you pick which order we do that. But what an amazing run you've had, and a great event. Every industry is here. Every tech company is here. Every AI company is here. Incredible. Incredible.
Extraordinary. And one of the great announcements of the past year has been Groq. When you made the purchase of Groq, did you realize how insufferable Chamath would become? [laughter]
I had an inkling. But we're his friends. We have to deal with him every week.
I know it.
You had to deal with him for the six-week close. [laughter]
I know, it's like two weeks. Two weeks.
It's all coming back to me now. It's making me rather uncomfortable. The thing is, uh, many of our strategies are presented in broad daylight at GTC years in advance of when we do it. Two and a half years ago, I introduced the operating system of the AI factory, and it's called Dynamo. Dynamo, as you know, is a machine that was created by Siemens to turn, essentially, water into electricity. And Dynamo powered the factory of the last industrial revolution. So I thought it was the perfect name for the operating system of the next industrial revolution, the AI factory. And so inside Dynamo, the fundamental technology is disaggregated inference.
Jason, I know you're super technical.
Absolutely.
I know it.
I'll let you take this one. Go ahead and define it for [laughter] the audience. I don't want to step on you.
Yeah, thank you. I know you wanted to jump in there for a second. It's disaggregated inference, which means the processing pipeline of inference is extremely complicated. In fact, it is the most complicated computing problem today. Incredible scale, lots of mathematics of different shapes and sizes. And we came up with the idea that you would disaggregate parts of the processing, such that some of it can run on some GPUs and the rest of it can run on different GPUs. That led us to realizing that maybe even disaggregated computing could make sense, that we could have a heterogeneous nature of computing. That same sensibility led us to Mellanox. You know, today Nvidia's computing is spread across GPUs, CPUs, switches, scale-up switches, scale-out switches, networking processors, and now we're going to add Groq to that, and we're going to put the right workload on the right chips. You know, we've just really evolved from a GPU company to an AI factory company.
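The prefill/decode disaggregation described here can be sketched as a toy scheduler. This is a minimal sketch under stated assumptions: the pool names and throughput numbers below are invented for illustration, not Dynamo's actual API or real hardware specs. The intuition is that prefill (prompt processing) is parallel and compute-bound, while decode (token generation) is sequential and memory-bandwidth-bound, so each phase is routed to the hardware best suited for it.

```python
from dataclasses import dataclass

# Toy model of disaggregated prefill/decode inference. Pool names and
# throughput numbers are illustrative assumptions only.

@dataclass
class Pool:
    name: str
    tokens_per_sec: float  # throughput for the phase this pool serves

def schedule(prompt_tokens: int, output_tokens: int,
             prefill_pool: Pool, decode_pool: Pool) -> dict:
    """Estimate per-phase latency when each phase runs on its own pool."""
    prefill_s = prompt_tokens / prefill_pool.tokens_per_sec  # parallel, compute-bound
    decode_s = output_tokens / decode_pool.tokens_per_sec    # sequential, bandwidth-bound
    return {"prefill_s": prefill_s, "decode_s": decode_s,
            "total_s": prefill_s + decode_s}

# Route the compute-heavy phase and the bandwidth-heavy phase to
# different pools, per the disaggregation argument above.
plan = schedule(prompt_tokens=8000, output_tokens=400,
                prefill_pool=Pool("gpu-prefill", 40000.0),
                decode_pool=Pool("lpu-decode", 800.0))
print(plan["prefill_s"], plan["decode_s"])  # 0.2 0.5
```

The same sketch extends naturally to the heterogeneous mix Jensen describes: more pools (CPUs, networking processors, storage processors), each receiving the slice of the pipeline it handles best.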
I mean, I think that was probably the biggest takeaway that I had. You're seeing this fundamental disaggregation, where we've gone from a GPU, and now you have this collection of all these different options that will eventually exist. The thing that you said on stage, and I'd like the high-value inference people to take a listen to this, was that 25% of your data center space should be allocated to this Groq LPU-GPU combo, about 25% of the data center. So can you tell us about how the industry looks at this idea of now basically creating this next-generation form of disaggregated prefill-decode?
Yeah, take a step back. At the time that we added this, we went from large language model processing to agentic processing. Now, when you're running an agent, you're accessing working memory, you're accessing long-term memory. You're using tools. You're really beating up on storage really hard. You have agents working with other agents. Some of the agents are very large models. Some of them are smaller models. Some of them are diffusion models. Some of them are autoregressive models. And so there are all kinds of different types of models inside this data center. We created Vera Rubin to be able to run this extraordinarily diverse workload. And so what used to be a one-rack company, we now added four more racks, right?
So, Nvidia's TAM, if you will, increased from whatever it was to probably something, call it, you know, 33% to 50% higher. Now, of that 33% or 50%, a lot of it is going to be storage processors; it's called BlueField. Some of it, a lot of it I'm hoping, will be Groq processors, and some of it will be CPUs. And a lot of it's going to be networking processors. And so all of this is going to be running basically the computer of the AI revolution, called agents, right? The operating system of modern industry.
What about embedded applications? So, you know, my daughter's teddy bear at home wants to talk to her. What goes in there? Is it a custom ASIC, or does there end up being a much broader set of TAM, with developer tools that are maybe different for different use cases at the edge and in embedded applications?
We think that there are three computers in the problem, at the largest scale, when you take a step back. There's one computer that's really about training the AI model, developing and creating the AI. Another computer is for evaluating it. Depending on the type of problem you're having (for example, you look around, there are all kinds of robots and cars and things like that), you have to evaluate these robots inside a virtual gym that represents the physical world. So it has to be software that obeys the laws of physics. And that's the second computer. We call that Omniverse. The third computer is the computer at the edge, the robotics computer.
That robotics computer, one of them could be a self-driving car. Another one's a robot. Another one could be a teddy bear, a little tiny one for a teddy bear. One of the most important ones is one that we're working on that basically turns telecommunications base stations into part of the AI infrastructure. It's a $2 trillion industry. All of that, in time, will be transformed into an extension of the AI infrastructure. And so radios will become edge devices: factories, warehouses, you name it. And so there are these three basic computers.
All of them, you know, are going to be necessary.
Jensen, last year, I think you were ahead of the rest of the world in saying inference isn't going to a thousand x.
Just last year, yes.
Is it going to 1 million x? Is it going to 1 billion x? Yeah.
Right. And I think people at the time thought it was pretty hyperbolic, because the world was still focused on pre-training, on training. Here we are now. Inference has exploded. We're inference-constrained. You announced an inference factory that I think is leading edge, that's going to be 10x better in terms of throughput than the next factory. But if I listen to the chatter out there, it's that your inference factory is going to cost $40 or $50 billion, and the alternatives, the custom ASICs, AMD, others, are going to cost $25 to $30 billion, and you're going to lose share. So why don't you talk to us: what are you seeing, how do you think about share, and does it make sense for all these folks to pay something that's a 2x premium to what others are marketing?
The big takeaway, the big idea, is that you should not equate the price of the factory and the price of the tokens, the cost of the tokens. It is very likely, and in fact I can prove it, that the $50 billion factory will generate for you the lowest-cost tokens. And the reason for that is because we produce these tokens at extraordinary efficiency, 10 times. You know, it turns out $20 billion of the $50 billion is just land, power, and shell, right? And then on top of that, you have storage anyways, networking anyways, you've got CPUs anyways, you've got servers anyways, you've got cooling anyways. The difference between that GPU being 1x price or half price is not the difference between $50 billion and $30 billion. Pick your favorite number, but let's say between $50 billion and $40 billion. That is not a large percentage when the $50 billion data center actually has 10 times the throughput.
Right, Jess?
That's the reason why I said that, for most chips, if you can't keep up with the state of the technology and the pace that we're running, even when the chips are free, it's not cheap enough.
Yeah.
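The cost argument above is simple division: dollars of capex per unit of token throughput. A back-of-envelope sketch with the round numbers the speakers use (illustrative only; amortization, power pricing, and utilization are ignored):

```python
# Capex per unit of token throughput, using the conversation's round numbers.
def capex_per_throughput(total_capex_billions: float, relative_throughput: float) -> float:
    """Lower is cheaper tokens: dollars of factory per unit of output."""
    return total_capex_billions / relative_throughput

# $50B factory with 10x throughput vs. a $40B factory (cheaper GPUs) at 1x.
expensive_factory = capex_per_throughput(50.0, 10.0)
cheap_factory = capex_per_throughput(40.0, 1.0)

print(expensive_factory)  # 5.0 -> $5B of capex per throughput unit
print(cheap_factory)      # 40.0 -> eight times more capex per token produced
```

On these numbers, the pricier factory yields tokens at one-eighth the capital cost per unit of throughput, which is the shape of Jensen's claim.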
Can I just ask a general strategy question?
Yeah. I mean, you're running the most valuable company in the world. This thing is going to do $350-plus billion of revenue next year, $200 billion of free cash flow. It's compounding at these crazy rates. How do you decide what to do? Like, how do you actually get the information? I mean, it's famous now, these emails that people are meant to send you. But how do you really decide, how do you get an intuition of how to shape the market, where to really double down, where to maybe pull back, where to actually go into a greenfield? How does that information get to you? How do you decide these things?
In the final analysis, that's the job of the CEO. Yeah.
And our job is to define the vision, define the strategy. We're informed, of course, by amazing computer scientists, amazing technologists, great people all over the company, but we have to shape that future. Well, part of it has to do with: is this something that's insanely hard to do? If it's not hard to do, we should back away from it. And the reason for that is, if it's easy to do, obviously, lots of competitors. Is this something that has never been done before, that's insanely hard to do, and that somehow taps into the special superpowers of our company? And so I have to find this confluence of things that meets the standard. And in the end, we also know that a lot of pain and suffering is going to go into it.
Yeah.
There are no great things that were invented because they were just easy to do and, just like that, first try, here we are. And so if it's super hard to do and nobody's ever done it before, it's very likely that you're going to have a lot of pain and suffering, and so you better enjoy it.
So can you just look at maybe three or four of the more long-tail things you announced and talk about their long-term viability, whether it's the data centers in space, or what you're trying to do with ADAS and autos, or what you're trying to do on the biology side? Just give us a sense of how you see some of these curves inflecting upwards in some of these longer-tail businesses.
Excellent. Physical AI, large category, we believe. And I just mentioned we have three computing systems, and all the software platforms on top of them. Physical AI as a large category is the technology industry's first opportunity to address a $50 trillion industry that has largely been, you know, void of technology until now. And so we need to invent all of the technology necessary to do that. I felt that that was a 10-year journey. We started 10 years ago. We're seeing it inflecting now. It is a multi-billion-dollar business for us. It's close to $10 billion a year now. And so it's a big business and it's growing exponentially. And so that's number one. In the case of digital biology, I think we are literally near the ChatGPT moment of digital biology. We're about to understand how to represent genes, proteins, cells. We already know how to understand chemicals. And so the ability for us to represent and understand the dynamics of the building blocks of biology, that's a couple of, two, three, five years from now. In five years' time, I completely believe the healthcare industry, with digital biology, is going to inflect. And so these are a couple of the really great ones, and you could see they're all around us. Agriculture, inflecting now, no question.
Yeah. Jensen, I want to take you from the data center to the desktop. The company was built in large part on hobbyists, video gamers, and all those graphics cards in the beginning. And you mentioned, in front of I think 10,000 people here, Claude, OpenClaw, Claude Code, and what a revolution agents have become, and specifically the hobbyists, who are really where a lot of the energy, a lot of the innovation, breaks. They want desktops. You announced one here, I believe it's the Dell 6800. This is a very powerful workstation to run local models, 750 gigs of RAM. Obviously the Mac Studio is sold out everywhere. In my company, we're moving to OpenClaw everything. Freeberg just got claw-pilled. You got claw-pilled, I understand. And you're obsessed with these. What does this from-the-streets movement of creating open-source agents and using open source on the desktop mean to you? Where is that going?
Great. Yeah. So, first of all, let's take a step back. In the last two years we saw basically three inflection points. The first one was generative: ChatGPT brought AI to everybody, to our awareness. But the fact of the matter is, the technology sat in plain sight months before ChatGPT. It wasn't until ChatGPT put a user interface around it, made it easy for us to use, that generative AI took off. Now, generative AI, as you know, generates tokens for internal consumption as well as external consumption. Internal consumption is thinking, which led to reasoning. o1 and o3 continued that wave of ChatGPT; grounded information made AI not only answer questions, but answer questions in a more grounded, useful way. We started seeing the revenues and the economic model of OpenAI start to inflect. Then the third one, visible at first only inside the industry: we saw Claude Code, the first agentic system that was very useful. Really revolutionary stuff. But Claude Code was only available for enterprises. Most people outside never saw anything about Claude Code until OpenClaw. OpenClaw basically put into the popular consciousness what an AI agent can do.
Mhm.
That's the reason why OpenClaw is so important from a cultural perspective.
Now, the second reason why it's so important is that OpenClaw is open, but it formulates, it structures a type of computing model that is basically reinventing computing altogether. It has a memory system; its scratch is a short-term memory file system. It has, it has scales. Did you say skills or scales?
Skills.
Oh, skills.
They do have skills theoretically. Yeah.
Yeah. Skills.
So, the first thing, you know, is it has resources. It manages resources. It does scheduling.
Yep.
Right. And it runs cron jobs. It could spawn off agents. It could, you know, decompose a task and solve problems as it does scheduling. It has I/O subsystems. It has, you know, input; it has output, and connects to WhatsApp. And also it has an API that allows it to run multiple types of applications, called skills.
Yeah.
These four elements fundamentally define a computer.
Yeah.
And therefore what do we have? We have a personal artificial intelligence computer for the very first time.
Open source.
It's open source. It runs literally everywhere. And so this is basically the blueprint, the operating system of modern computing.
Yeah.
And it's going to run literally everywhere. Now, of course, one of the things that we had to help it do: whenever you have agentic software, you have to make sure it's governed, because agentic software has access to sensitive information, it executes code, and it could communicate externally. We have to make sure that all of it is governed, all of it is secure, and that we have policies that give these agents two of those three things, but not all three at the same time. And so the governance part of it we contributed to. Peter Steinberger was here, and we've got a mountain of great engineers working with him to help secure and keep that thing so that it can protect our privacy, protect our security.
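The four elements Jensen lists (memory/scratch, skills, resource management, scheduling), plus the two-of-three governance rule, can be sketched as a toy agent computer. Everything below, the class name, capability names, and token budget, is an invented illustration, not OpenClaw's real interface:

```python
# Toy "personal AI computer": the four elements named in the conversation,
# plus a policy that an agent may hold at most two of the three risky
# capabilities (sensitive data, code execution, external communication).

RISKY = {"sensitive_data", "code_execution", "external_comms"}

class AgentComputer:
    def __init__(self, grants):
        if len(RISKY & set(grants)) >= 3:
            raise PermissionError("at most two of the three risky capabilities")
        self.grants = set(grants)
        self.scratch = {}                    # 1. memory system: short-term scratch
        self.skills = {}                     # 2. skills: named callable tools
        self.resources = {"tokens": 10_000}  # 3. resource management: a budget
        self.queue = []                      # 4. scheduling: queued (cron-like) jobs

    def add_skill(self, name, fn):
        self.skills[name] = fn

    def schedule(self, skill_name, *args):
        self.queue.append((skill_name, args))

    def run(self):
        results = []
        while self.queue and self.resources["tokens"] > 0:
            name, args = self.queue.pop(0)
            self.resources["tokens"] -= 100  # charge each job against the budget
            results.append(self.skills[name](*args))
        return results

agent = AgentComputer(grants={"sensitive_data", "code_execution"})  # two of three: allowed
agent.add_skill("first_words", lambda text: text.split()[0])
agent.schedule("first_words", "hello from scratch memory")
results = agent.run()
print(results)  # ['hello']
```

Requesting all three risky grants at once raises `PermissionError`, which is the spirit of the governance policy Jensen describes: any two capabilities may coexist, but never the full set.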
Jensen, that paradigm shift makes some of the AI legislation that has passed around the country to regulate AI, and a lot of the proposed legislation, effectively moot, doesn't it? Can you just comment for a second on how quickly the paradigm shift obviates a lot of the models for regulatory oversight of AI, which is becoming a very hot topic in politics right now?
Well, this is the part where, with policy makers, we need to always get in front of them. And Brad, you do a great job doing this. We have to get in front of them and inform them about the state of the technology, what it is, what it is not. It is not a biological being. It is not alien. It is not conscious. It is computer software.
Yeah. Exactly.
And when we say things like "we don't understand it at all," it is not true. We understand a lot of things about this technology. And so I think, one, we have to make sure that we continue to inform the policy makers and not allow doomerism and extremism to affect how policy makers think about and understand this technology. However, we still have to recognize that technology is moving really fast, and not get policy ahead of the technology too quickly. And the risk that we run as a nation, our greatest source of national security concern with respect to AI, is that other countries adopt this technology while we are so angry at it, or afraid of it, or somehow paranoid about it, that our industries, our society, don't take advantage of AI. So I'm mostly worried about the diffusion of AI here in the United States.
Can you just double-click: if you were in the seat, in the boardroom of Anthropic, during that whole scuttlebutt with the Department of War? It builds on this idea that people didn't know what to think. It added to this layer of either resentment or fear or just general mistrust that people sometimes have at the software levels of AI. What do you think you would have told Dario and that team to do differently, to try to change some of this outcome and some of this perception?
The first thing that I would say about Anthropic is, first of all, the technology is incredible. We are a large consumer of Anthropic technology. We really admire their focus on security, really admire their focus on safety. The culture by which they went about it, the technical excellence by which they went about it, really fantastic. I would say that the desire to warn people about the capability of the technology is also really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good, right? [laughter] Um, because this technology is too important to us, right? And I think that it is fine to predict the future, but we need to be a little bit more circumspect. We need to have a little bit more humility that, in fact, we can't completely predict the future. And to say things that are quite extreme, quite catastrophic, for which there's no evidence of it happening, could be more damaging than people think. And of course, we are technology leaders. There was a time when nobody listened to us.
Yeah.
But now, because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter. And I think we have to be much more circumspect. We have to be more moderate. We have to be more balanced. We have to be more thoughtful.
Well, you know, I would nominate you. I think the industry's got to get together. 17% popularity of AI in the United States. I mean, we see what happened to nuclear, right? We basically shut down the entire nuclear industry, and now we have 100 fission reactors being built in China and zero in the United States. We hear about moratoriums on data centers. So I think we have to be a lot more proactive about that. But I want to go back to this agentic explosion that you're seeing inside your company, the efficiencies, the productivity gains inside your company. There's a lot of debate whether or not we're seeing ROI, right? You and I, entering into this year, the big question was: are the revenues going to show up? Are the revenues going to scale like intelligence? And then we had this kind of Oppenheimer moment, a five, six billion dollar month by Anthropic in February. As you look ahead, you announced visibility into a trillion dollars of just Blackwell and Vera Rubin over the course of the next couple of years. When you see this happening at Anthropic and OpenAI, do you think we're on that curve now, where we're going to see revenues scale in the way that intelligence is scaling?
I'll answer this a couple different ways. When you look around this audience, you will see that Anthropic and OpenAI are represented here. But in fact, 99% of everything that is here is all AI, and it's not Anthropic and OpenAI.
Right. Right.
And the reason for that is because AI is very diverse.
I would say that the second most popular model as a category is open models.
Number one is, yeah, open source, open weights, open source.
OpenAI is number one. Open source is number two. A very distant third is Anthropic. And that tells you something about the scale of all of the AI companies that are here. And so it's important to recognize that.
Let me come back and say a couple things. One, when we went from generative to reasoning, the amount of computation we needed was about a hundred times more. When we went from reasoning to agentic, the computation is probably another hundred times. So we're looking at, in just two years, computation going up by a factor of 10,000x.
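Made explicit, the figure is just two multiplicative jumps:

```python
# Each capability transition multiplied compute demand by roughly 100x,
# per Jensen's estimates; two transitions compound multiplicatively.
generative_to_reasoning = 100  # generative -> reasoning: ~100x
reasoning_to_agentic = 100     # reasoning -> agentic: ~another 100x

total_increase = generative_to_reasoning * reasoning_to_agentic
print(total_increase)  # 10000, the "10,000x in two years" figure
```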
Meanwhile, people pay for information, but people mostly pay for work.
Yes.
Talking to a chatbot and getting an answer is super great, right?
Helping me do some research, unbelievable. But getting work done, I'll pay for, indeed.
And so that's where we are. Agentic systems get work done. They're helping our software engineers get work done. And so then you take that: you've got 10,000x more compute, and you get probably, at this point, 100x more consumption now.
Yes.
Yeah.
And we haven't even started scaling yet.
We are absolutely at a million x. Which is, I think, a great place to talk about headcount. You have, what, 20, 30,000 people at the company?
We have 43,000 employees. You know, I would say 38,000 are engineers.
The conversation we've had on the pod a number of times is, "Oh my god, look at the token usage in our companies. It is growing massively." And some people are asking, "Hey, when I join a company, how many tokens do I get? Because I want to be an effective employee." And you postulated, I believe, during your two-and-a-half-hour keynote, pretty long keynote, well done, that you were spending...
If it was well done, it would be shorter. Yeah.
He didn't have time to write an hour forty-five. So you guys know there is no practice, and so it's gripping and ripping.
Yeah. Yeah.
So I just want to let you know, I was writing the speech while I was giving the speech. Okay. [laughter] So you never know.
But does that mean, if we do back-of-envelope math, I apologize, 75,000 in tokens for each engineer or something like that? So, are you spending at Nvidia a billion, two billion, on tokens for your engineering team right now?
We're trying to. Let me give you a thought experiment. Let's say you have a software engineer or AI researcher, and you pay them $500,000 a year. We do that all the time.
Yeah.
Okay. This is happening all of the time. That $500,000 engineer, at the end of the year, I'm going to ask them how much they spent in tokens. If that person said $5,000, I will go ape.
Yes.
Right. If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. Okay? And this is no different than one of our chip designers who says, "Guess what? I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools."
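The threshold in this exchange reduces to a ratio of token spend to salary, using the numbers quoted above:

```python
# Token spend as a fraction of an engineer's salary (figures from the
# conversation: a $500k engineer, $5k vs. $250k in annual token spend).
def token_spend_ratio(salary: float, token_spend: float) -> float:
    """Fraction of salary spent on model tokens."""
    return token_spend / salary

too_low = token_spend_ratio(500_000, 5_000)           # the alarming case
expected_floor = token_spend_ratio(500_000, 250_000)  # Jensen's stated minimum

print(too_low)         # 0.01 -> 1% of salary
print(expected_floor)  # 0.5  -> 50% of salary
```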
This is a real paradigm shift to start thinking about these all-star employees.
It almost reminds me of what we learned in the NBA, when LeBron James started spending a million dollars a year just on the health of his body, maintaining it.
That's right.
Here he is at age 41, still playing. It really is: hey, if these are incredible knowledge workers, why wouldn't we give them superhuman abilities?
That's exactly it. Where does that go? If we extrapolate out two or three years from now, what is the efficiency of that all-star at an Nvidia, and what are they able to accomplish? What do they look like?
Well, first of all, thoughts like "wow, this is too hard": that thought is gone. "This is going to take a long time": that thought is gone. "We're going to need a lot of people": that thought is gone. This is no different than in the last industrial revolution. Somebody goes, "Boy, that building really looks heavy." Nobody says that. "Wow, that mountain looks too big." Nobody says that.
Right.
Everything that's too big, too heavy, takes too long, those ideas are all gone.
You're reduced to creativity.
That's right.
What can you come up with?
Exactly. Which means now the question is, how do you work with these agents? Well, it's just a new way of doing computer programming. In the past, we wrote code. In the future, we're going to write ideas, architectures, specifications. We're going to organize teams. We're going to help them define how to evaluate, the definition of good versus bad. What does it look like when something is a great outcome? How to iterate with you, how to brainstorm. That's really what you're looking for. And I think that every engineer is going to have a hundred agents.
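The workflow Jensen sketches, write a specification and an evaluation of good versus bad, then fan the work out to many agents and keep the best result, can be illustrated with stand-in agents. The random scorers below are placeholders, not a real agent framework; the function and field names are invented for this example.

```python
import random

# Spec-driven agent orchestration, sketched: a spec, an evaluation
# function, and a fan-out over many stand-in agents.

def evaluate(candidate: dict, spec: dict) -> float:
    """Score a candidate against the spec: fraction of requirements met."""
    met = sum(1 for req in spec["requirements"] if req in candidate["covers"])
    return met / len(spec["requirements"])

def run_agents(spec: dict, n_agents: int, seed: int = 0) -> dict:
    """Spawn n stand-in agents on the same spec, return the best-scoring result."""
    rng = random.Random(seed)  # seeded so the sketch is deterministic
    candidates = []
    for i in range(n_agents):
        # Each placeholder agent "covers" a random subset of the requirements;
        # a real agent would produce actual work products here.
        covers = {r for r in spec["requirements"] if rng.random() > 0.3}
        candidates.append({"agent": i, "covers": covers})
    return max(candidates, key=lambda c: evaluate(c, spec))

spec = {"requirements": ["parse input", "handle errors", "write report"]}
best = run_agents(spec, n_agents=100)
print(evaluate(best, spec))  # with 100 agents, almost surely 1.0
```

The engineer's job in this picture is the spec and the `evaluate` function, the definition of good versus bad; the hundred agents do the iteration.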
Back to the PR problem the industry has right now. You have executives like David Freeberg, with Ohalo, who's literally looking, through the use of technology, your technology and AI, at the number of calories produced and making high-quality calories. What is the factor by which you think you can bring the cost down, Freeberg, and what impact does this vision have for what you're doing?
Zero-shot genomic modeling. And it works. And you have that moment and you're like, holy...
Honestly. And that's after people are replacing entire enterprise software stacks in a night. I did something in 90 minutes. I was telling the guys about it: I replaced a whole software stack and a whole bunch of workload, in 90 minutes, on Claude. Ran this agentic system, built the whole thing, deployed it. We were on a Sunday night, 10 p.m. I was done at 11:30. I went to bed.
As the CEO, you replaced it?
Yeah. And everyone on my management team had to do a similar exercise over the weekend. What we saw on Monday, I was like, it's over. But the technical stuff, the science stuff: we did something in 30 minutes using Auto Research, and I'd love your view on Auto Research and what that tells us about how far we still have to go in terms of efficiency. Using Auto Research and a chunk of data, something was published internally where we said, "Oh my god."
And that would normally be a PhD thesis that would take seven years. It would be one of the most celebrated PhD theses we've ever seen in this field, and it would be in the journal Science. And it was done in 30 minutes on a desktop computer running Auto Research, with all the data we just ingested. We got it on Friday and we're like, "Hey, let's try it." Booted up, went to GitHub, downloaded Auto Research, and ran it. And you see everyone's face just go. The potential of what this is unlocking for us: the kind of thing that would take seven years happened in 30 minutes, and we're experiencing it in genomics, and we're like, this is unbelievable. So I think the acceleration is widening the aperture for everyone in a way you didn't imagine a few years ago. But just going back to the Auto Research point: can you comment on the fact that this thing got published with 600 lines of code in a weekend, on its capacity to run locally and achieve what it can achieve with all of these diverse data sets, and on what that tells us about how early we are in terms of optimization on algorithms and hardware?
The fundamental reason why Open Claw is so incredible, number one, is its confluence, its timing, with the breakthroughs in large language models.
Yeah, its timing was perfect.
It was impeccable. Now, in a lot of ways, Peter probably wouldn't have come up with it if not for the fact that Claude and GPT and ChatGPT have reached a level that is really very good,
right? It is also a new capability that allows these models to use tools, the tools that we've created over time: web browsers and Excel spreadsheets, and, in the case of chip design, Synopsys and Cadence and Omniverse and Blender and Autodesk. All of these tools are going to continue to be used.
Some people say that the enterprise IT software industry is going to get destroyed. Let me give you the alternative view. The enterprise software industry is limited by butts in seats. It's about to get a hundred times more agents banging on those tools. There are going to be agents banging on SQL, agents banging on vector databases, agents banging on Blender, agents banging on Photoshop. And the reason for that is, first of all, those tools do a very good job. Second, those tools are the conduit between us. In the final analysis, when the work is done, it has to be represented back to me in a way that I can control.
Right?
And I know how to control those tools.
And so I need everything to be put back into Synopsys. I want everything to be put back into Cadence, because that's how I control it. That's how I ground-truth it.
Let me ask you a question about open source. So we have these closed-source models. They're excellent.
We have these open-weight models. Many of the Chinese models are incredible.
Absolutely incredible. Two days ago, you may not have seen this because you were busy on stage, but there was a training run in this crypto project called Bittensor, Subnet 3. They managed to train a 4-billion-parameter Llama model totally distributed, with a bunch of people contributing excess compute, but they were able to do it statefully and manage a training run, which I thought was a pretty crazy technical accomplishment.
Yeah. Because it's like random people and each person gets a little share.
Our modern version of Folding@home.
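The distributed run described above is, at its core, data-parallel training with gradient averaging: each contributor computes gradients on its own private shard of data, and the shared model state advances only once the contributions are aggregated. A minimal sketch of that loop, using a toy linear model and five contributors; the sizes, learning rate, and plain averaging here are illustrative assumptions, not details of the actual Bittensor protocol:

```python
import numpy as np

# Toy linear model trained by averaging gradients from independent
# contributors: an illustrative stand-in for stateful distributed
# training, not the real Bittensor protocol.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_shard(n=64):
    # Each contributor holds its own private shard of data.
    X = rng.normal(size=(n, 2))
    return X, X @ true_w

shards = [make_shard() for _ in range(5)]  # five contributors (assumed)
w = np.zeros(2)                            # shared model state

def local_gradient(w, X, y):
    # Gradient of mean squared error on one contributor's shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def global_loss(w):
    return float(np.mean([np.mean((X @ w - y) ** 2) for X, y in shards]))

loss_before = global_loss(w)
for _ in range(200):
    # Contributors compute gradients in parallel; a coordinator
    # averages them and advances the shared state by one SGD step.
    grads = [local_gradient(w, X, y) for X, y in shards]
    w -= 0.05 * np.mean(grads, axis=0)
loss_after = global_loss(w)
```

The averaging step is the easy half; the accomplishment being described is keeping that shared state consistent across unreliable, untrusted contributors over the open internet.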
Exactly. So what do you think about the end state of open source? Do you see this decentralization of architecture as well, and decentralization of compute, to support open weights and a totally open-source approach to making sure AI is broadly available to everyone?
I believe we fundamentally need models as a first-class proprietary product, as well as models as open source. These two things are not A or B. It's A and B. There's no question about it. And the reason is that a model is a technology, not a product; a technology, not a service. For the vast majority of consumers, the horizontal layer, the general intelligence, I would really, really love not to go fine-tune my own. I would really love to keep using ChatGPT. I love to use Claude. I love to use Gemini. I love to use X. And they all have their own personalities, as you know, which kind of depends on my mood and on what problem I'm trying to solve. I might do it on X or I might do it on ChatGPT. And so that segment of the industry is thriving; it's going to be great. However, all these industries, their domain expertise, their specialization, has to be channeled, has to be captured, in a way that they can control, and that can only come from open models. The open-model industry, and we're contributing tremendously to it, is near the frontier. And quite frankly, even if it reaches the frontier, I think that world-class models as a product, as a service, are going to continue to thrive.
Every startup we're investing in now is open-source first, and then going to the proprietary models.
Yeah. And the beautiful thing is, because you have a great router you connect it to, on day one, every single day, you're going to have access to the world's best model. And then it gives you time to cost-reduce and fine-tune and specialize, and so you're going to have world-class capabilities out of the chute every single time.
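The router pattern being described, always sending a request to the best model available today and then shifting traffic to cheaper specialized models as they mature, can be sketched in a few lines. Every model name, score, and price below is a hypothetical placeholder, not a real product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Model:
    name: str             # hypothetical model name
    quality: float        # higher is better (a stand-in benchmark score)
    cost_per_mtok: float  # dollars per million tokens (illustrative)
    available: bool = True

# Hypothetical registry; in practice this would be refreshed as new
# frontier and fine-tuned models come online.
REGISTRY = [
    Model("frontier-large", quality=0.95, cost_per_mtok=15.0),
    Model("open-weights-tuned", quality=0.88, cost_per_mtok=2.0),
    Model("small-local", quality=0.70, cost_per_mtok=0.1),
]

def route(max_cost: Optional[float] = None) -> Model:
    """Pick the highest-quality available model under an optional cost cap."""
    candidates = [
        m for m in REGISTRY
        if m.available and (max_cost is None or m.cost_per_mtok <= max_cost)
    ]
    if not candidates:
        raise RuntimeError("no model satisfies the constraints")
    return max(candidates, key=lambda m: m.quality)
```

On day one everything routes to the frontier model; once a cheaper fine-tuned model is good enough, tightening the cost cap shifts traffic to it without changing application code.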
Jensen, can I ask a question?
Nobody wants the US to win the global AI race more than you, right? But a year ago, the Biden-era diffusion rule really worked against the diffusion of American AI around the world. So here we are, a year into the new administration. Give us a grade. Where are we in terms of global diffusion and the rate at which we're spreading US AI technology around the world? Are we an A? Are we a B? A C? What's working, what's not working?
Well, first of all, President Trump wants American industry to lead. He wants the American technology industry to lead. He wants the American technology industry to win. He wants us to spread American technology around the world. He wants the United States to be the wealthiest country in the world. He wants all of that. At the current moment, as we speak, Nvidia gave up a 95% market share in the second largest market in the world, and we're at 0%.
President Trump...
[clears throat] That's right. President Trump wants us to get back in there. And the first thing is to get licensed for the companies that we're going to be able to sell to. We've got many companies who have requested licenses. We've applied for licenses for them, and we've got approved licenses from Secretary Lutnick. Now we've informed the Chinese companies, and many of them have given us purchase orders, and so we're in the process of cranking up our supply chain again to go ship. I think at the highest level, Brad, one of the things we should acknowledge is this: our national security is diminished when we don't have access to miniature motors, rare earth minerals. It's diminished when we don't control our telecommunications networks. It's diminished when we can't provide sustainable energy for our country. It is fundamentally diminished. Every single one of these industries is an example of what I don't want the AI industry to be.
Right? When we look forward in time and we ask, what do we want? What does it look like when the American technology industry, the American AI industry, leads the world? We can all acknowledge that there is no way one AI model wins universally; we can all acknowledge that that is an outcome that makes no sense. However, we can all imagine the American tech stack, from chips to computing systems to the platforms, being used broadly by the world, where they build their own AI, they use public AI, they use private AI, whatever, and they can build their applications in their society. I would love for the American tech stack to be 90% of the world. Yes, I would love that. The alternative, if it looks like solar, rare earths, magnets, motors, telecommunications, I consider a very bad outcome for national security.
Great.
Yeah.
How much are you monitoring the situation with the conflicts around the world right now? And how much does it worry you, Jensen? China and Taiwan, and then helium availability coming out of the Middle East, which, I understand, can be a supply chain risk to semiconductor manufacturing. How much do these situations worry you? How much time are you spending on them?
Well, first of all, in the Middle East: we have 6,000 families there.
Yeah.
We have a lot of Iranians at Nvidia, and their families are still in Iran, and so we have a lot of families there. The first thing is, they're quite anxious. They're quite concerned, quite scared. We're thinking about them all the time. We're monitoring and keeping an eye on them all the time. They have 100% of our support. I've been asked several times, are we still considering being in Israel? We are 100% in Israel. We are 100% behind the families there. We are 100% in the Middle East. I was also asked, given what's happening in the Middle East, is that an area where we believe we can expand artificial intelligence to? I believe that there's a reason we went to war, and I believe at the end of the war the Middle East will be more stable than before. And so if we were considering it before, we should absolutely be considering it after. I'm 100% in on that. With respect to Taiwan, we have to do three things. One, we have to make sure that we re-industrialize the United States as fast as we can.
And whether it's the chip manufacturing plants, the computer manufacturing plants, or the AI factories...
How are we doing on that?
We're doing excellent, by gaining the strategic support, by gaining the friendship, of the supply chain of Taiwan. By gaining their friendship, by gaining their support, we were able to build Arizona and Texas and California at incredible rates. They are genuinely a strategic partner. They deserve our support. They deserve our friendship. They deserve our generosity, and they're doing everything they can to accelerate the manufacturing process for us. So I think that's number one. Number two, we ought to diversify the manufacturing supply chain. Whether it's South Korea, whether it's Japan, whether it's Europe, we've got to diversify the supply chain and make it more resilient. And number three, let's demonstrate restraint. While we're increasing our diversity and resilience, let's not push unnecessarily. We need to be patient.
Is helium a problem?
A lot of reports, you know.
I think helium could be a problem, but it's also the case that the supply chain probably has a lot of buffer in it. These kinds of things tend to have a lot of buffer.
You've made massive progress in self-driving. You made a big announcement. You've added many more partners, including BYD. There was just a video of you driving around in a Mercedes, and a huge announcement with Uber that you're going to have a number of cars on the road from many different manufacturers. Your bet, I believe, is that there's going to be an Android-type open-source platform that you're going to play a major part in, with dozens of car providers, and then maybe on the other side there could be an iOS with Tesla or Waymo. What's your strategic thinking there, and how does that chessboard emerge? Because it feels like you have a pretty deep stack, and in some ways you're competing and in other places you're collaborating.
Yeah. Taking a step back: we believe that everything that moves will be autonomous, completely or partly, someday. Number one. Number two, we don't want to build self-driving cars; we want to enable every car company in the world to build self-driving cars.
And so we built all three computers: the training computer, the simulation computer, the evaluation computer, as well as the car computer. We developed the world's safest driving operating system. We also created the world's first reasoning autonomous vehicle, so that it can decompose complicated scenarios into simpler scenarios that it knows how to navigate, just like our reasoning systems. And that reasoning system, called Alpamayo, has enabled us to achieve incredible results. We vertically optimize. We horizontally innovate, and we let everybody decide. Do you want to buy one computer from us? In the case of Elon and Tesla, they buy our training computers. Do they want to buy our training computer and our simulation computers, or do they want to work with us on all three and even put the car computer in the car? Our attitude is we want to solve the problem.
We're not prescriptive about the solution, and we're delighted however you work with us.
Let me build on this question, because I think it's so fascinating. You actually do create this platform; a thousand flowers are blooming. But it's also true that some of those flowers now want to go back down the stack and try to compete with you a little bit. Google has TPUs; Amazon has Inferentia and Trainium. Everybody's sort of spinning up their own version of "I think I can out-Nvidia Nvidia," even though they also tend to be huge customers. How do you navigate that? What do you think happens over time, and where do those things play in the complexion of this kind of vision?
Yeah, really great question. First of all, we're an AI company. We build foundation models. We're at the frontier in many different domains. We build every single layer, every single part of the stack. We're the only AI company in the world that works with every AI company in the world. They never show me what they're building, and I always show them exactly what I'm building.
Right.
Yeah. And so the confidence comes from this: we are delighted to compete on what is the best technology, and to the extent that we can continue to run fast, I believe that buying from Nvidia is still one of the most economical things they could do. That's just incredible confidence. Number one. Number two, we're the only architecture that can be in every cloud, and that gives us some fundamental advantages. We're the only architecture you could take from a cloud and put on-prem, in the car, in any region, in space.
That's right. In space. And so there's a whole part of our market, about 40% of our business, that most people don't realize: unless you have the CUDA stack, unless you can build an entire AI factory, the customers don't know what to do with you. They're not trying to build chips. They're not trying to buy chips. They're trying to build AI infrastructure. And so they want you to come in with the full stack, and we've got the whole stack. And so, surprisingly, Nvidia is gaining market share. If you look at where we are today, we're gaining share.
Do you think what happens is these guys try, and they realize, oh my god, it's too much, and then they come back? Is that why the share grows?
Well, we're gaining share for several reasons. One, our velocity has gone up. We help people realize it's not about building the chip; it's about building the system. And that system is really hard to build. And so their business with us is increasing. In the case of AWS, I think they just announced, I think it was yesterday, that they're going to buy a million chips in the next couple of years. That's a lot of chips from AWS. And that's on top of all the chips they've already bought. And so we're delighted to do that. But number one, we're gaining share this last couple of years because we now have Anthropic coming to Nvidia.
Meta is coming to Nvidia. And the growth of open models is incredible, and that's all on Nvidia. So we're growing in share because of the number of models. We're also growing in share because all of these companies are outside of the cloud, and they're growing regionally, in enterprise, in industries, at the edge. And that entire segment of growth is really hard to address if you're just building an ASIC.
Brad, related to that, and not to get in the weeds on the numbers, but analysts don't seem to believe it, right? If you look at the consensus forecast: you said compute could go up a million-x, and yet they have you growing next year at 30%, the year after that at 20%, and in 2029, which is supposed to be a monster year, at 7%. Right? So if you take your TAM and apply their growth numbers, it suggests that your share will plummet. Do you see anything in your future order book that would make that correct?
Yeah. First of all, they just don't understand the scale and the breadth of AI.
Yes.
Yeah.
Yeah, I think that's true. Most people think that AI is in the top five hyperscalers, right?
That's right. There's also an orthodoxy around the law of large numbers, where they have to go back to their investment banking risk committee and show some model. They're not going to believe that 5 trillion goes to 15 trillion. They're like, it can go to seven, or it can be a 10 trillion dollar company.
It's all just "it's never happened before, so you can't say it will." And that's because you have to redefine what it is that you do. Somebody made an observation recently: Jensen, how can Nvidia be larger than Intel in servers? And the reason is that the CPU market of the entire data center was about $25 billion a year, right? We do $25 billion, as you guys know, in the time that we've been sitting here.
And so obviously, obviously...
[laughter] That was a joke.
No, but it's the All-In podcast. Don't worry. Everything on this show is rough. Don't worry about it. It's all in here. Anyways, that was not guidance.
But anyhow, the point is: how big you can be depends on what it is that you make, right?
Nvidia is not making chips. Number one, making chips does not help you solve the AI infrastructure problem anymore; it's too complicated. Number two, most people think that AI is narrowly the things that they talk about and hear and see. AI is much bigger than that. OpenAI is incredible; they're going to be enormous. Anthropic is incredible; they're going to be enormous. But AI is going to be much, much bigger than that. And we address that segment.
Tell us about data centers in space for a second.
Yeah.
We're already in space. How should the layman think about what that business is, versus the big data center buildouts happening on the ground?
Well, we should definitely work on the ground first, because we're already here. That's number one. Number two, we should prepare to be out in space, and obviously there's a lot of energy in space. The challenge, of course, is cooling: you can't take advantage of conduction and convection, so you can only use radiation, and radiation requires very large surfaces. Now, that's not an impossible thing to solve, and there's a lot of space in space. But nonetheless the expense is still quite significant. We're going to go explore it.
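The "very large surfaces" point can be made concrete with the Stefan-Boltzmann law: in vacuum, a radiator sheds heat only as P = εσAT⁴, so the required area grows linearly with power and falls with the fourth power of the panel temperature. A back-of-the-envelope sketch, where the 1 MW load, 350 K panel temperature, and 0.9 emissivity are illustrative assumptions, and incoming solar heating is ignored:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(power_w: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """One-sided radiator area needed to reject power_w purely by radiation."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting a 1 MW compute load at a 350 K panel temperature:
area = radiator_area_m2(1e6, 350.0)  # on the order of 1,300 m^2
```

Running the panels hotter helps dramatically: because of the T⁴ term, the same load rejected at 450 K needs roughly a third of the area, which is why thermal design rather than compute dominates these architectures.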
We're already there. We're already radiation-hardened. We have CUDA in satellites around the world. They're doing imaging, image processing, AI imaging, and that kind of stuff ought to be done in space. Instead of sending all the data back here and doing the imaging down here, we ought to just do the imaging out in space. So there's a lot of things that we ought to do in space. In the meantime, we're going to explore what the architecture of a data center in space looks like. And it'll take years. That's okay. I've got plenty of time.
I wanted to double-click on healthcare. I know you've got a big effort there. We're all of a certain age where we're thinking about lifespan, health span. I mean, we all look great.
I think some better than others. I don't know what your secret is, Jensen. You look pretty good. What are you taking? What's off the menu? You've got to talk to me when we're backstage. I want to know in the green room what you've got going on.
Squats and push-ups and sit-ups.
Perfect. Okay. But that works. In terms of the buildout in healthcare, where is that going and what kind of progress are we making? I was just using Claude to do some analysis, asking where all these billing codes go. We spend twice as much money in the US and seem to get half as much. It seemed like 15 to 25% of the dollars spent were on these first GP visits. And I think we all know ChatGPT and a large language model does a better job, more consistently, today at a first visit. So what has to happen to break through all that regulation and have AI make a true impact on the healthcare system?
There are several areas that we're involved in in healthcare. One is AI physics, or rather AI biology: using AI to understand, represent, and predict biological behavior. That's very important in drug discovery. The second is AI agents, and that's assistance, helping with diagnosis, things like that. OpenEvidence is a really good example. Hippocratic is a really good example. I love working with those companies. I really think this is an area where agentic technology is going to revolutionize how we interact with doctors and how we interact with healthcare. The third part that we're involved in is physical AI. The first one was AI physics, using AI to predict physics; this one is physical AI, AI that understands the properties of the laws of physics. And that's used for robotic surgery; there's a huge amount of activity there. Every single instrument, whether it's ultrasound or CT or whatever instrument we interact with in a hospital, in the future will be agentic.
Yeah.
You know, Open Claw, in a safe version, will be inside every single instrument. And so in a lot of ways that instrument is going to be interacting with patients and nurses and doctors in a very unique way.
So much investment goes into AI weapons. [cough] It would be wonderful to see some investment in AI EMTs and paramedics, saving lives, not just taking them. Which I think is a great segue into robotics. You've got dozens of partners. We had this very weird, I don't know whether to call it a lost decade, or 20 years, of Boston Dynamics. Google bought a bunch of companies, then wound up selling them and spinning them out, and people just thought, huh, robotics is just not ready for prime time. And now here we have the world's greatest entrepreneur at this time, tied with you, Elon Musk. That was a good save, I hope. Optimus is pretty impressive. And then there are other companies in China. How close is that to actually being in our lives, where we might see a robotic chef, a robotic nurse, a robotic housekeeper, this humanoid form factor actually working in the real world, knowing what you know with those partners, and the fidelity, especially in China, where they seem to be doing as good a job as we're doing here, or maybe better?
Mhm.
We invented the industry, largely. America invented it. You could argue we got into it too soon.
Yeah.
And we got exhausted. We got tired about five years before the enabling technology appeared.
The brain.
Yeah. We just got tired of it a little too soon. Okay, that's number one. But it's here now. Now the question is: how much longer from the point of a high-functioning existence proof to reasonable products? Technology never takes more than a couple, two, three cycles, and a couple, two, three cycles is basically somewhere around three to five years. That's it. In three to five years we're going to have robots all over the place. I think China is formidable, and the reason is that their microelectronics, their motors, their rare earths, their magnets, which are foundational to robotics, are the world's best. So in a lot of ways our robotics industry relies deeply on their ecosystem and their supply chain. And they're obviously moving very quickly. Our robotics industry will have to rely a lot on it; the world's robotics industry will have to rely a lot on it. And so I think you're going to see some fast movement here.
Ultimately one for one? Elon seems to think we're going to have one robot for every human: 7 billion for 7 billion, 8 billion for 8 billion.
Well, I'm hoping for more. Yeah, I'm hoping for more. First of all, there's a whole bunch of robots that are going to be in factories working around the clock. There's going to be a whole bunch that don't move, or move only a little bit. Almost everything will be robotic.
What does the world look like? Sorry, let me say: robotics, for me, is one of the pieces that unlocks economic mobility opportunities for every individual. When everyone got a car, they could go and do a lot of different jobs. When everyone gets a robot, their robot can do a lot of work for them. They can stand up an Etsy store, a Shopify store; they can create anything they want with their robot. They can do things that they independently cannot do. I think the robot is going to end up being the greatest unlock for prosperity for more people on Earth than we've ever seen with any technology before.
Yeah, no doubt. I mean, the simple math at the moment is that we're millions of people short on labor today.
Right. Yeah.
We're actually in really desperate need of robotics, and all of these companies could grow more if they had more labor. That's number one. And some of the things that you mentioned are super fun. Because of robots, we'll have virtual presence. I'll be able to go into the robot at my house and virtually operate it while I'm on a business trip, right?
Walk around the house.
Yeah. Walk the dog.
Rake the leaves.
Yeah. Exactly. [laughter] Freak out the dog.
Maybe not quite that, but just, you know, wander around and see what's going on in the house. Chat with the dogs, chat with the kids.
Yeah.
Yeah. And time travel, too: we're going to be able to travel at the speed of light, and so clearly we're going to send our robots ahead of us.
Yeah.
Not going to send myself. I'm going to send a robot, you know.
Check it out.
Yeah. Yeah. And then I'm going to upload my AI.
Well, it's inevitable. It unlocks the moon, and it unlocks Mars, as targets for colonization, which gives us near-infinite resources. Getting material back from the moon costs effectively zero energy, because you can use solar power to accelerate it. So you could have factories on the moon that make everything the world needs, and the robots are going to be the unlock that enables it.
That's right. Distance no longer matters.
Distance doesn't matter. Yeah.
The more revenue we get out of models and agents, the more we can invest in building the infrastructure, which then unlocks more capabilities in models and agents. Dario on Dwarkesh's podcast recently said by '27 or '28 we'll have hundreds of billions of dollars of revenue out of the model companies and the agent companies, and he forecasts a trillion dollars by 2030, right? This is non-infrastructure AI revenue.
I think he's being very conservative. I believe Dario and Anthropic are going to do way better than that.
Wow.
Way better than that.
Wow. So, from 30 billion to a trillion.
Yeah. And the reason for that is the one part that he hasn't considered: I believe every single enterprise software company will also be a value-added reseller of Anthropic's tokens.
A value-added reseller of OpenAI.
That's right. And that part of their go-to-market is going to expand tremendously this year.
Look at this logarithmic expansion.
Yes. What do you think, in that world, is the moat? What's left over? I mean, you have some moats that are, frankly, I think as this scales, almost insurmountable. The best one that nobody talks about is probably CUDA, which is just an incredible strategic advantage. But in the future, if a model can be used to create something incredible, then the next spin of a model can be used to maybe disrupt it. In your mind, for these companies that are building at that application layer, what's their moat? How do they differentiate themselves?
Deep specialization. I believe that they're going to have general models that are connected into the software company's agentic system, right?
Many of those models are cloud models and proprietary models, but many of those models are specialized sub-agents that they've trained on their own.
Right. All right. So, the call to arms for you for entrepreneurs is look, know your vertical.
That's right.
Know it more deeply and better than everybody else.
That's right.
And then wait for these tools because they're catching up to you and now you can imbue it with your knowledge.
That's right. The sooner you connect your agent with customers, the sooner that flywheel is going to cause your agent to get better.
It very much is an inversion of what we do today. Because today we build a piece of software and we say, what generalizes? Then we try to sell it as broadly as possible, and then sell the customization around it.
In fact, exactly right. We create a horizontal. But notice there are all these GSIs and all of these consultants, who are specialists, who then take your horizontal platform and specialize it.
Exactly. And that's arguably a five or six times bigger industry, the customization.
It is, absolutely, yeah. That's right. So I think that these platform companies have an opportunity to become that specialist, to become that vertical domain expert.
You know, I just want to give you your flowers. I think it was three years ago you said you're not going to lose your job to AI, you're going to lose your job to somebody using AI. And here we are: the entire conversation has revolved around this concept of agents making people superhuman, and the business opportunity expanding, and entrepreneurship expanding. You actually saw it pretty clearly.
Yeah.
Have you changed your view?
No, I'm not a doomer. You can hold space for, I think, two ideas. One is there are going to be a lot...
That's Spiral Jake, we call it.
No, [laughter] there you can.
But that's just because he doesn't hang out with me enough.
Well, we I mean we a little bit.
We don't talk. [laughter]
He will show up at your breakfast. He'll follow you around.
I'm not asking for it.
You can come with me and Tucker. We ski
in Japan every January. People love it.
Me and Tucker go road trip.
There is going to be job displacement.
And then the question becomes, you know, do those people have the fortitude, the resolve to then go embrace these, you know, technologies.
We're going to see 100% of driving by humans go away. That's a beautiful thing in the lives saved. But we have to recognize that's 10 to 15 million people in the United States who are employed in that way. And so that is gonna happen.
Yes, I think that jobs will change.
For example, there are many chauffeurs today who drive the car.
I believe that many of those chauffeurs will actually be in the car, sitting behind the steering wheel, while the car is driving by itself. And the reason for that is because, remember what a chauffeur does in the end. These chauffeurs, they're helping you; they're your assistants. They're helping you with your luggage. They're helping you with a lot of things. And so I wouldn't be surprised, actually, if the chauffeurs of the future become your mobility assistants, and they're helping you with a whole bunch of other stuff on the way to the hotel.
Yeah. And the car is driving by itself.
The autopilot in planes created a lot more pilots, and it didn't take any of the pilots out of the cockpit, even though the autopilot is flying the plane 90% of the time.
And by the way, while that car is driving itself, that chauffeur is going to be doing a bunch of other work on his phone, arranging and coordinating a bunch of things for you. The whole pie just grows.
Yes, every job will be transformed.
Um, some jobs will be eliminated.
However, we also know that many, many jobs will be created. The one thing that I will say to young people who are coming out of school, who are concerned, who are anxious about AI: be the expert at using AI.
Look, we all want our employees to be experts at using AI, and it's not trivial. Knowing how to specify without overprescribing, leaving enough room for the AI to innovate and create while we guide it to the outcome we want: all of that requires artistry.
You had this great advice when you were at Stanford, I think it was, which is, "I wish to you pain and suffering." Do you remember that?
Yeah.
Fantastic.
What's your advice to young people around what they should be studying? If they're about to leave high school, because now those are the kids that are really native to this, they haven't made a decision about college, what to study, whether to go to college at all. How do you guide those kids? What would you tell them?
I still believe in deep science, deep math, language skills. You know, language is the programming language of AI, the ultimate programming language.
And so, as it turns out, it could be that the English major will be the most successful. Yeah.
And so I would just advise: whatever education you get, just make sure that you're deeply, deeply expert in using AIs. One of the things that I wanted to say with respect to jobs, and I want everybody to hear it: at the beginning of the deep learning revolution, one of the finest computer scientists in the world, someone I deeply respect, predicted that computer vision would completely eliminate radiologists, and that the one field he advised everybody not to go into is radiology. Ten years later, his prediction was 100% right: computer vision has been integrated into all of the radiology technologies and radiology platforms in the world. The surprising outcome is that the number of radiologists actually went up, and the demand for radiologists has skyrocketed.
The reason for that is because everybody's job has a purpose and a task. The task that you do is studying the scans, but your purpose is helping the doctors, helping the patients, diagnosing disease. And so what's surprising is, because the scans are now being done so quickly, they can do more scans, improving healthcare.
Yes.
But doing more scans more quickly allows patients to be onboarded a lot more quickly, treated a lot more quickly. And as it turns out, hospitals enjoy making money, too.
Yeah.
Right.
They're doing more scans. They're
treating more customers.
The revenues go up. Guess what? Perfect.
And a country that grows faster, productivity increases. A wealthier country can put more teachers in the classroom, not fewer teachers in the classroom.
That's right. You just give every one of those teachers a personalized curriculum for every student in the room. It makes them all bionic and leads to a lot more.
Every single student will be assisted by AI, but every single student will need great teachers.
Yeah. Yeah. Amazing. Jensen, congratulations on your success. Really, this is an incredibly positive, uplifting discussion. We really appreciate you taking the time for us.
He is the steward we need.
You are the more vocal one.
I'm being very vocal about the positive side of it. I think there's too much doomerism. But I also think it takes humility to have this level of success and be humble about it: we're making software, guys.
Yeah.
And I think that that's actually really healthy for people to hear. We have done this before. We have invented categories and industries before. We don't need to go to this scaremongering place. It does nothing.
And we get to choose, right? We have autonomy and agency. We get to pick how we deploy this. Okay, everybody. We'll see you next time on the All-In interview.
Okay.
Well done, brother.
Thanks, man.
Good job.
Thank you, sir. That was awesome.
Good. Good.
Appreciate you. You guys are awesome.
Look at this. Look at this [clears throat] big crowd behind you guys, man.