
No Priors Ep. 89 | With NVIDIA CEO Jensen Huang

By No Priors: AI, Machine Learning, Tech, & Startups

Summary

Key Takeaways

  • Hyper Moore's law scaling: over the next 10 years, the hope is to double or triple performance every year at data center scale, driving cost and energy down by a factor of 2-3 annually, compounding aggressively beyond traditional Moore's law. [01:51], [02:23]
  • Co-design drives new scaling: the new way of scaling involves co-design, modifying algorithms to reflect the system architecture and vice versa, enabling shifts from FP64 to FP32 to BF16 to FP8; controlling both sides yields massive gains. [03:07], [03:24]
  • NVLink enables inference scaling: inference requires low latency for self-reflection (chain-of-thought, tree search) alongside high throughput; NVLink creates a virtual GPU with huge FLOPS, memory, and bandwidth to resolve this tension. [04:38], [05:44]
  • The CUDA foundation accelerates iteration: CUDA's solid foundation enabled a 5x performance improvement on Hopper for Llama in one year without changing the upper layers or algorithms, impossible with traditional computing. [06:46], [07:08]
  • Build full data centers for software: Nvidia builds complete data centers in every configuration, at scale, to ensure the software works end to end, then disaggregates and sells the parts, treating the data center as the new unit of computing. [11:06], [12:49]
  • AI factories produce intelligence: new data centers are single-tenant AI factories producing tokens that reconstitute into intelligence, a trillion-dollar commodity industry every company and country will need. [25:13], [26:18]

Topics Covered

  • Hyper Moore's Law Scales AI 2-3X Yearly
  • Co-Design Replaces Dennard Scaling
  • NVLink Creates Virtual Mega-GPUs
  • AI Chip Designers Already Essential
  • Data Centers Become AI Factories

Full Transcript

Hi listeners, and welcome to No Priors. Today we're here again, one year since our last discussion, with the one and only Jensen Huang, founder and CEO of Nvidia. Today Nvidia's market cap is over $3 trillion, and it's the one literally holding all the chips in the AI revolution. We're excited to hang out in Nvidia's headquarters and talk all things frontier models, data-center-scale computing, and the bets Nvidia is taking on a 10-year basis. Welcome back, Jensen. Thirty years into Nvidia and looking 10 years out, what are the big bets you think are still to make? Is it all about scale-up from here? Are we running into limitations in how much compute and memory we can squeeze out of the architectures we have? What are you focused on?

Well, if we take a step back

and think about what we've done: we went from coding to machine learning, from writing software tools to creating AIs, and all of that went from running on CPUs designed for human coding to running on GPUs designed for AI coding, basically machine learning. The world has changed the way we do computing; the whole stack has changed, and as a result the scale of the problems we can address has changed a lot. If you can parallelize your software on one GPU, you've set the foundations to parallelize across a whole cluster, or maybe across multiple clusters or multiple data centers. So I think we've set ourselves up to scale computing, and to develop software, at a level that nobody's ever imagined before, and we're at the beginning of that. Over the next 10 years, our hope is that we can double or triple performance every year, not at the chip level but at data center scale, and therefore drive the cost down by a factor of two or three, and drive the energy down by a factor of two or three, every single year. When you double or triple every year, in just a few years it adds up; it compounds really aggressively. So I wouldn't be surprised if, compared to the way people think about Moore's law, which is 2x every couple of years, we're going to be on some kind of hyper Moore's law curve, and I fully hope that we continue to do that.

What do you think is making that happen even faster than Moore's law? Because Moore's law was sort of self-reflexive, right? It was something

he said, and then people kind of implemented it to make it happen.

Yeah. There were two fundamental technical pillars: one of them was Dennard scaling, and the other was Carver Mead's VLSI scaling. Both of those were rigorous techniques, but they have really run out of steam, and so now we need a new way of doing scaling. Obviously, the new ways of scaling are all kinds of things associated with co-design. Unless you can modify the algorithm to reflect the architecture of the system, and then change the system to reflect the architecture of the new software, and go back and forth, unless you can control both sides of it, you have no hope. But if you can control both sides of it, you can do things like move from FP64 to FP32 to BF16 to FP8 to FP4 to who knows what. So I think that co-design is a very big part of it.

The second part of it, we call full stack. The second part of it is data center scale: unless you can treat the network as a compute fabric, push a lot of the work into the network, push a lot of the work into the fabric, and as a result do compression at very large scales, you can't get there. That's the reason we bought Mellanox and started fusing InfiniBand and NVLink in such an aggressive way, and now look where NVLink is going to go. The compute fabric is going to scale out what appears to be one incredible processor called a GPU; now we're going to have hundreds of GPUs working together.
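The precision ladder Jensen mentions (FP64 down to FP8 and FP4) is one concrete payoff of co-design. A minimal NumPy sketch, purely illustrative and not NVIDIA's implementation (FP8 and FP4 are not native NumPy dtypes, so FP16 stands in for the idea), shows the trade: the same weights in narrower formats use proportionally less memory and bandwidth, at the cost of rounding error.

```python
import numpy as np

# Illustrative only: cast the same weights to narrower floating-point
# formats and compare memory footprint and worst-case rounding error.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000)  # float64 by default

for dtype in (np.float64, np.float32, np.float16):
    w = weights.astype(dtype)
    err = np.abs(w.astype(np.float64) - weights).max()
    print(f"{np.dtype(dtype).name}: {w.nbytes / 1e6:.0f} MB, "
          f"max rounding error {err:.1e}")
```

Each halving of precision halves the bytes moved per weight; whether the algorithm can tolerate the extra rounding error is exactly the question co-design answers from both sides.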

You know, most of these computing challenges we're dealing with now, and one of the most exciting ones, of course, is inference-time scaling, have to do with generating tokens at incredibly low latency, because you're self-reflecting, as you just mentioned. You're going to be doing tree search, you're going to be doing chain-of-thought, you're going to be doing probably some amount of simulation in your head, reflecting on your own answers. You're going to be prompting yourself and generating text silently, and still responding, hopefully, in a second. The only way to do that is if your latency is extremely low. Meanwhile, the data center is still about producing high-throughput tokens, because you still want to keep costs down, keep the throughput high, and generate a return. These two fundamental things about a factory, low latency and high throughput, are at odds with each other. So in order for us to create something that is really great at both, we had to go invent something new, and NVLink is really our way of doing that. Now you have a virtual GPU that has an incredible amount of FLOPS, because you need it for context; a huge amount of working memory; and still incredible bandwidth for token generation, all at the same time.
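The latency/throughput tension Jensen describes can be sketched with a toy serving model (all constants here are invented for illustration): batching more requests amortizes the fixed cost of each decode step, which raises the factory's total tokens per second, but every user then waits on the longer step.

```python
# Toy serving model (constants invented for illustration): each decode
# step costs a fixed overhead (weight loading, kernel launch) plus a
# small per-request cost. Batching amortizes the fixed overhead.
STEP_OVERHEAD_MS = 20.0   # fixed cost per decode step
PER_REQUEST_MS = 0.5      # incremental cost per request in the batch

def step_time_ms(batch: int) -> float:
    return STEP_OVERHEAD_MS + PER_REQUEST_MS * batch

for batch in (1, 8, 64, 256):
    t = step_time_ms(batch)
    throughput = batch / t * 1000  # tokens/sec across the whole "factory"
    print(f"batch {batch:3d}: {t:6.1f} ms per token per user, "
          f"{throughput:8.0f} tokens/sec total")
```

A batch of 1 gives the best per-user latency but terrible utilization; a huge batch gives great throughput but each chain-of-thought token arrives more slowly. Hardware that raises FLOPS, memory, and bandwidth together, which is the role NVLink plays here, shrinks the step time enough to serve large batches at low latency.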

I guess all the people building the models are actually also optimizing things pretty dramatically. Like, David on my team pulled the data over the last 18 months or so: the cost of a million tokens going into a GPT-4-equivalent model has basically dropped 240x. So there's just massive optimization and compression happening on that side as well.

Yeah. Just in our layer, just in the layer that we work on: one of the things we care a lot about, of course, is the ecosystem of our stack and the productivity of our software. People forget that because you have the CUDA foundation, and that's a solid foundation, everything above it can change. If the foundation is changing underneath you, it's hard to build a building on top; it's hard to create anything interesting on top. So CUDA has made it possible for us to iterate so quickly. Just in the last year, I think we went back and benchmarked: from when Llama first came out, we've improved the performance of Hopper by a factor of five without the algorithm, the layer on top, ever changing. Now, a factor of five in one year is impossible using traditional computing approaches,

but with accelerated computing, and using this way of co-design, we're able to invent all kinds of new things.

Yeah. How much are your biggest customers thinking about the interchangeability of their infrastructure between large-scale training and inference?

Well, you know, infrastructure is disaggregated these days. Sam was just telling me that he had decommissioned Volta just recently. They have Pascals, they have Amperes, all different configurations of Blackwell coming; some of it is optimized for air cooling, some of it is optimized for liquid cooling. Your services are going to have to take advantage of all of this. The advantage that Nvidia has, of course, is that the infrastructure you built today for training will just be wonderful for inference tomorrow. Most of ChatGPT, I believe, is inferenced on the same type of systems that it was trained on just recently. If you can train on it, you can inference on it. So you're leaving behind a trail of infrastructure that you know is going to be incredibly good at inference, and you have complete confidence that you can take the return on the investment you've made and put it into new infrastructure to go scale with. You're leaving behind something of use, and you know that Nvidia and the rest of the ecosystem are going to keep improving the algorithms, so that the rest of your infrastructure improves by a factor of five in just a year. That motion will never change. So the way people will think about the infrastructure is: yeah, even though I built it for training

today, it's got to be great for training, and we know it's going to be great for inference. Inference is going to be multi-scale. First of all, in order to distill a smaller model, it's good to have a larger model to distill from. So you're still going to create these incredible frontier models. They're going to be used, of course, for the groundbreaking work; you're going to use them for synthetic data generation; you're going to use the big models to teach smaller models, and distill down to smaller models. So there's a whole bunch of different things you can do, but in the end you're going to have giant models all the way down to little tiny models. The little tiny models are going to be quite effective, not as generalizable, but quite effective. They're going to perform one very specific task incredibly well. We're going to see superhuman performance on a task, in one little tiny domain, from a tiny model. Maybe it's not a small language model but a tiny language model, TLMs, or whatever. So I think we're going to see all kinds of sizes.
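The big-model-teaches-small-model pattern Jensen describes can be sketched in a few lines. This is a toy stand-in, not any production pipeline: a nonlinear function plays the frontier teacher, it labels unlabeled inputs (synthetic data generation), and a small linear student is fit to imitate its outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 16))  # "unlabeled" inputs
w_true = rng.standard_normal(16)

def teacher(x):
    # Stand-in for a big frontier model: nonlinear, expensive to run.
    return np.tanh(x @ w_true)

# Synthetic data generation: the teacher labels the unlabeled inputs...
y_soft = teacher(X)

# ...and a tiny student (here, just a linear least-squares fit) is
# trained to imitate the teacher's outputs instead of ground truth.
w_student, *_ = np.linalg.lstsq(X, y_soft, rcond=None)

mse = float(np.mean((X @ w_student - y_soft) ** 2))
print(f"student imitates teacher with MSE {mse:.3f}")
```

The student is cheaper and less general than the teacher, which matches the giant-models-down-to-tiny-models picture: distilled models trade generality for being very effective, and very cheap, on the narrow distribution they were taught.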

And our hope is that it's just kind of like software is today.

Yeah, I think in a lot of ways artificial intelligence allows us to break new ground in how easy it is to create new applications, but everything about computing has largely remained the same. For example, the cost of maintaining software is extremely expensive, and once you build it, you would like it to run on as large an installed base as possible. You would like not to write the same software twice. A lot of people still feel the same way: you'd like to take your engineering and move it forward. So to the extent that the architecture allows you, on one hand, to create software today that runs even better tomorrow with new hardware, that's great; or the software, the AI, that you create tomorrow runs on a large installed base, that's great. That way of thinking about software is not going to change.

Nvidia has moved into a larger and larger, let's say, unit of support for customers. I think about it going from single chip to server to rack, NVL72. How do you think about that progression? What's next? Should Nvidia do full data centers?

In fact, we build full data centers. It's the way we build everything: if you're developing software, you need the computer in its full manifestation. We don't build PowerPoint slides and ship the chips; we build a whole data center. Until you get the whole data center built up, how do you know the software works? Until you get the whole data center built up, how do you know your fabric works, and that all the efficiencies are what you expected? How do you know it's going to really work at scale? That's the reason it's not unusual to see somebody's actual performance be dramatically lower than their peak performance as shown in PowerPoint slides. Computing is just not what it used to be. I say that the new unit of computing is the data center; that's what you have to deliver, that's what we build. Now, we build a whole thing like that, and then for every single thing, with every combination: air-cooled, x86, liquid-cooled, Grace, Ethernet, InfiniBand, NVLink, no NVLink, you know what I'm saying? We build every single configuration. We have five supercomputers in our company today; next year we're going to build easily five more. So if you're serious about software, you build your own computers; if you're serious about software, you're going to build your whole computer. And we build it all at scale.

This is the part that is really interesting: we build it at scale, and we build it vertically integrated, we optimize it full stack, and then we disaggregate everything and sell it in parts. That's the part that is completely, utterly remarkable about what we do. The complexity of that is just insane. And the reason for it is that we want to be able to graft our infrastructure into GCP, AWS, Azure, OCI. All of their control planes and security planes are different, and all the ways they think about their cluster sizing are different, and yet we make it possible for all of them to accommodate Nvidia's architecture, so that CUDA can be everywhere. That's really, in the end, the singular thought: we would like to have a computing platform that developers can use that's largely consistent, modulo 10% here and there because people's infrastructures are optimized slightly differently, but everything they build will run everywhere. This is one of the principles of software that should

never be given up, and we protect it quite dearly. It makes it possible for our software engineers to build once and run everywhere, and that's because we recognize that the investment in software is the most expensive investment. It's easy to test: look at the size of the whole hardware industry, and then look at the size of the world's industries; it's a hundred trillion dollars on top of this one-trillion-dollar industry, and that tells you something. The software that you build, you basically have to maintain for as long as you shall live. We've never given up on a piece of software. The reason CUDA is used is because I told everybody we will maintain this for as long as we shall live, and we're serious. We still maintain, I just saw a review the other day, Nvidia Shield, our Android TV. It's the best Android TV in the world. We shipped it seven years ago, it is still the number one Android TV that people who enjoy TV use, and we just updated the software this last week, and people wrote a new story about it. GeForce: we have 300 million gamers around the world, and we've never stranded a single one of them. The fact that our architecture is compatible across all of these different areas makes it possible for us to do it; otherwise, we would have software teams 100 times the size of our company today if not for this architectural compatibility. So we're very serious about that, and it translates into benefits for the developers.

One impressive substantiation of that recently was how quickly you brought up a cluster for xAI. Do you want to talk about that? Because that was striking in terms of both the

scale and the speed with which you did it.

You know, a lot of that credit you've got to give to Elon. First of all, to decide to do something, select the site, bring cooling to it, bring power, and then decide to build this 100,000-GPU supercluster, the largest of its kind in one unit. And then working backwards: we started planning together the date he was going to stand everything up, and that date was determined quite a few months ago. So all of the components, all the OEMs, all the systems, all the software integration we did with their team, all the network simulation: we simulated all the network configurations, we pre-staged everything as a digital twin, we pre-staged all of his supply chain, we pre-staged all of the wiring of the networking. We even set up a small version of it, kind of a first instance of it, a ground truth if you will, reference zero, system zero, before everything else showed up. So by the time everything showed up, everything was staged, all the practicing was done, all the simulations were done. And then the massive integration: even then, the massive integration was a monument of gargantuan teams of humanity crawling over each other, wiring everything up 24/7, and within a few weeks the clusters were up. I mean, it's really...

Yeah, it's really a testament to his willpower, and to how he's able to think through mechanical things and electrical things and overcome what are apparently extraordinary obstacles. What was done there is the first time a computer of that large a scale has ever been done at that speed, and it took our two teams working together: the networking team, the compute team, the software team, the training team, the infrastructure team, from the electrical engineers to the software engineers, all working together. It was really quite a feat to watch.

Was there a challenge that felt most likely to be blocking, from an engineering perspective? Just the tonnage of electronics that had to come together?

I mean, it'd probably be

worth just measuring it. It's tons and tons of equipment; it's just abnormal. Usually with a supercomputer system like that, you plan it for a couple of years, and from the moment the first systems come delivered to the time you've probably commissioned everything for some serious work, don't be surprised if it's a year. That happens all the time; it's not abnormal. Now, we couldn't afford to do that. So a few years ago there was an initiative in our company called Data Center as a Product. We don't sell it as a product, but we have to treat it like it's a product: everything about planning for it, standing it up, optimizing it, tuning it, keeping it operational. The goal is that it should be kind of like opening up your beautiful new iPhone: you open it up and everything just kind of works. Of course, it's a miracle of technology to make it like that, but we now have the skills to do it. So if you're interested in a data center, you just have to give me a space, some power, and some cooling, and we'll help you set it up within, call it, 30 days. It's pretty extraordinary.

That's wild.

If you look ahead to 200,000, 500,000, a million, in a supercluster or whatever you call it at that point, what do you think is the biggest blocker? Capital? Energy? Supply in one area?

Everything. Nothing about the scales that you just talked about is normal, but nothing is impossible. No laws of physics limit it, but everything is going to be hard. And of course, is it worth it? Like you can't believe. To get to something that we would recognize as a computer that so easily and so ably does what we ask it to do, otherwise-general intelligence of some kind, even if we could argue about whether it's really general intelligence, just getting close to it is going to be a miracle. We know that. So I think there are five or six endeavors trying to get there, right? Of course OpenAI and Anthropic and xAI, and of course Google and Meta and Microsoft. This frontier, the next couple of clicks up that mountain, is just so vital. Who doesn't want to be first up that mountain? I think the prize for reinventing intelligence altogether is just too consequential not to attempt it. And so I

think there are no laws of physics against it, but everything is going to be hard.

A year ago when we spoke together, we asked what applications you were most excited about that Nvidia would serve next, in AI and otherwise, and you talked about how you let your most extreme customers sort of lead you there, and about some of the scientific applications. I think that's become a much more mainstream view over the last year. Is it still science, AI's application to science, that most excites you?

I love the fact that we have AI chip designers here at Nvidia. I love that we have AI software engineers.

How effective are AI chip designers today?

Super good. We

couldn't build Hopper without them. And the reason is that they can explore a much larger space than we can, and they have infinite time; they're running on a supercomputer. We have so little time using human engineers that we don't explore as much of the space as we should, and we also can't explore it combinatorially: I can't explore my space while including your exploration and your exploration. Our chips are so large that it's not like one chip is being designed; it's designed almost like a thousand chips, and we have to optimize each one of them kind of in isolation. You really want to optimize a lot of them together, cross-module co-design, and optimize across a much larger space. Obviously we're going to be able to find local maxima that are hidden behind local minima somewhere, so clearly we can find better answers. You can't do that without AI engineers; we simply can't do it, we just don't have enough time.

One other

thing that's changed since we last spoke, and I looked it up: at the time, Nvidia's market cap was about $500 billion, and it's now over $3 trillion. So over the last 18 months you've added two and a half trillion plus of market cap, which is effectively $100 billion plus a month, or two and a half Snowflakes, or a Stripe plus a little bit, or however you want to think about it, a country or two.

A country or two.

Obviously a lot of things have stayed consistent in terms of focus on what you're building, etc., and walking through here earlier today I felt the buzz, like when I was at Google 15 years ago; you felt the energy of the company and the vibe of excitement. What has changed during that period, if anything? What is different in terms of either how Nvidia functions, or how you think about the world, or the size of bets you can take?

Well, our company can't change as fast as the stock price, let's just be clear about

that. So in a lot of ways, we haven't changed that much. I think the thing to do is to take a step back and ask ourselves, what are we doing? I think that's really the big observation, the realization, the awakening, for companies and countries: what's actually happening. What we were talking about earlier: from our industry's perspective, we've reinvented computing, and it hadn't been reinvented for 60 years. That's how big a deal it is. We've driven the marginal cost of computing down probably by a million-x in the last 10 years, to the point where we say, hey, let's just let the computer go exhaustively write the software. That's the big realization. And in a lot of ways, we were saying the same thing about chip design: we would love for the computer to go discover something about our chips that we otherwise couldn't have done ourselves, to explore our chips and optimize them in a way that we couldn't do ourselves, in the same way that we would love for that in digital biology, or any other field of science. So I think people are starting to realize: one, we've reinvented computing. But what does that even mean? And

all of a sudden we created this thing called intelligence. And what happened to computing? Well, we went from data centers that are multi-tenant stores of files; these new data centers we're creating are not data centers. They're not multi-tenant; they tend to be single-tenant. They're not storing any of our files; they're producing something. They're producing tokens, and these tokens are reconstituted into what appears to be intelligence, isn't that right? And intelligence of all different kinds: it could be articulation of robotic motion, it could be sequences of amino acids, it could be chemical chains, it could be all kinds of interesting things. So what are we really doing? We've created a new instrument, a new machinery, that in a lot of ways is the noun of the adjective "generative AI": instead of generative AI, it's an AI factory. It's a factory that generates AI, and we're doing that at extremely large scale. And what people are starting to realize is, maybe this is a new industry: it generates tokens, it generates numbers, but these numbers reconstitute in a way that is fairly valuable. And which industry would benefit from it? Then

you take a step back and ask yourself again: what's going on at Nvidia? On the one hand, we've reinvented computing as we know it, so there's a trillion dollars' worth of infrastructure that needs to be modernized. That's one layer of it. The bigger layer is that this instrument we're building is not just for data centers, which we're modernizing, but for producing some new commodity. How big can this new commodity industry be? Hard to say, but it's probably worth trillions. That, I think, is the picture if you take a step back: we don't build computers anymore, we build factories. And every country is going to need it; every company is going to need it. Give me an example of a company or industry that says, you know what, we don't need to produce intelligence, we've got plenty of it. So that's the big idea, I think. It's kind of an abstracted, industrial view, and someday people will realize that, in a lot of ways, the semiconductor industry wasn't about building chips; it was about building the foundational fabric for society. And then all of a sudden everybody goes, ah, I get it, this is a big deal. It's not just about chips.

How do you think about embodiment

now?

Well, the thing I'm super excited about is that, in a lot of ways, we're close to artificial general intelligence, but we're also close to artificial general robotics. Tokens are tokens; the question is whether you can tokenize it. Of course, tokenizing things is not easy, as you guys know. But if you were able to tokenize things and align them with large language models and other modalities: if I can generate a video of Jensen reaching out to pick up a coffee cup, why can't I prompt a robot to generate the tokens to pick it up? So intuitively, you would think the problem statement is rather similar for the computer, and I think we're that close. That's incredibly exciting. Now, the two brownfield robotic systems, brownfield meaning you don't have to change the environment for them, are self-driving cars, with a digital chauffeur, and embodied human robots. Between the cars and the humanoid robot, we could literally bring robotics to the world without changing the world, because we built the world for those two things. It's probably not a coincidence that Elon's focused on those two forms of robotics, because they're likely to have the largest potential scale. And so I

think that's exciting. But the digital version of it is equally exciting. We're talking about digital, or AI, employees. There's no question we're going to have AI employees of all kinds, and our Outlook will have some biologics and some artificial intelligences, and we'll prompt them all in the same way, isn't that right? Mostly, I prompt my employees: provide them context, ask them to perform a mission. They go and recruit other team members, they come back, and we go back and forth. How is that going to be any different with digital, AI employees of all kinds? So we're going to have AI marketing people, AI chip designers, AI supply chain people, AI everything. And I'm hoping that Nvidia is someday biologically bigger, but also, from an artificial intelligence perspective, much, much bigger. That's our future company.

If we came back and talked to you a year from now, what part of the company do you think will be the most artificially intelligent?

I'm hoping it's chip design.

OK.

The most important part. And that's right, because I should start where it moves the needle most, where we can make the biggest impact. It's such an insanely hard problem. I work with Sassine at Synopsys and Anirudh at Cadence, and I totally imagine them having Synopsys chip designers that I can rent: they know something about a particular module, their tool, and they've trained an AI to be incredibly good at it, and we'll just hire a whole bunch of them whenever we need them. We're in that phase of chip design. I might rent a million Synopsys engineers to come and help me out, and then go rent a million Cadence engineers to help me out. And what an exciting future for them: they have all these agents that sit on top of their tools platform, that use the tools platform and collaborate with other platforms. And Christian will do that at SAP, and Bill will do that at ServiceNow. People say that these SaaS platforms are going to be disrupted; I actually think the opposite, that they're

sitting on a gold mine, and there's going to be this flourishing of agents specialized in Salesforce, I think they call it Lightning, and SAP has ABAP; everybody's got their own language, isn't that right? And we've got CUDA, and we've got OpenUSD for Omniverse. Who's going to create an AI agent that's awesome at OpenUSD? We are, because nobody cares about it more than we do. So I think, in a lot of ways, these platforms are going to be flourishing with agents, and we're going to introduce them to each other, and they're going to collaborate and solve problems.

You see a wealth of different people working in every domain in AI. What do you think is under-noticed, or

that people that you want more entrepreneurs or Engineers or business people could work on well first of all I think what what is misunderstood and and

or maybe underestimated, is the under-the-surface activity of groundbreaking science, from computer science to science and engineering, that is being affected by AI and machine learning. I think you just can't walk into a science department anywhere, a theoretical math department anywhere, where AI and machine learning and the type of work that we're talking about today isn't going to transform tomorrow. If you take all of the engineers in the world, all of the scientists in the world, and you say that the way they're working today is an early indication of the future, because obviously it is, then you're going to see a tidal wave of AI, a tidal wave of machine learning, changing everything that we do in some short

period of time. Now remember, I saw the early indications of computer vision and the work with Alex Krizhevsky and Ilya Sutskever and Geoff Hinton in Toronto, and Yann LeCun, and of course Andrew Ng here at Stanford. You know, I saw the early indications of it, and we were fortunate to have extrapolated from what was observed, detecting cats, into a profound change in computer science, in computing altogether. And that extrapolation was fortunate for us, and of course we were so excited by it, so inspired by it, that we changed everything about how we did things. But how long did that take? From observing that toy AlexNet, which I think by today's standards would be considered a toy, to superhuman levels of capability in object recognition

well, that was only a few years. What is happening right now is a groundswell in all of the fields of science, not one field of science left behind. I mean, just to be very clear, everything from quantum computing to quantum chemistry, every field of science is involved in the approaches that we're talking about, and they've been at it for a couple, two, three years. If we give ourselves a couple, two, three years, the world's going to change. There's not going to be one paper, one breakthrough in science, one breakthrough in engineering, where generative AI isn't at the foundation of it. I'm fairly certain

of it now. And so, you know, every so often I hear questions about whether this is a fad. You just have to go back to first principles and observe what is actually happening: the computing stack, the way we do computing, has changed. The way you write software has changed. I mean, that is pretty core. Software is how humans encode knowledge, this is how we encode our algorithms, and we encode it in a very different way now. That's going to affect everything

nothing else will ever be the same. And so I think I'm talking to the converted here, and we all see the same thing, and all the startups that you guys work with, and the scientists I work with, and the engineers I work with, nothing will be left behind. I mean, we're going to take everybody with us. I think one of the most exciting things, coming from the computer science world and looking at all these other fields of science, is that I can go to a robotics conference now, a materials science conference, a biotech conference, and I'm like, oh, I understand this, you know, not at every level of the science, but in the driving of discovery. It is all the algorithms that are general, and there are some universal unifying concepts. Yeah.

And I think that's incredibly exciting, when you see how effective it is in every domain. Absolutely. Yeah, and I'm so excited that I'm using it myself every day. You know, I don't know about you guys, but it's my tutor now. I mean, I don't learn anything without first going to AI. You know, why learn the hard way? Just go directly to AI, go directly to ChatGPT, or, you know, sometimes I do Perplexity, just depending on the formulation of my questions, and I just start learning from there. And then you can always fork off and go deeper if you like. But, holy cow, it's just incredible. And almost everything I know, I double check. Even though I know it to be a fact, you know, what I consider to be ground truth, I'm the expert, I'll still go to AI and double check. Yeah, it's so great. Almost everything I do, I involve it. I

think it's a great note to stop on. Thanks so much for your time today. Yeah, really enjoyed it. Nice to see you guys. Thanks, Jensen. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen; that way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
