Sam Altman x Nikhil Kamath: How to Win When AI Changes Everything | People by WTF | Episode 13
By Nikhil Kamath
Summary
Topics Covered
- GPT-5 Feels PhD-Level Fluent
- Master AI Tools Over Majors
- Project Humility, Embrace Adaptation
- Build Defensible Layers atop AI
- India's AI Leapfrog Momentum
Full Transcript
Hi, Sam.
Hey, Nikhil.
How are you?
I'm good. Sorry I'm late. I got caught up in getting ready for the launch tomorrow and lost track of time in the excitement with the final results.
No worries. I'm guessing it must be really hectic, right?
It is a very hectic day.
I have the model and I've been playing with it a little bit. How is it different, Sam? I'm not an expert at this.
So yeah, there are all these ways we can talk about it, you know, it's better at this metric, or it can do this amazing coding demo that GPT-4 couldn't. But the thing that has been most striking for me, in ways that are both big and small, is that going back from GPT-5 to our previous generation model is just so painful. It's just worse at everything. And I've taken for granted that there is a fluency and a depth of intelligence with GPT-5 that we haven't had in any previous model. Um, it's an integrated model, so you don't have to pick in our model switcher and know if you should use GPT-4o or o3 or o4-mini or any of the complicated things. It's just one thing that works. And it is like having PhD-level experts in every field available to you 24/7 for whatever you need, not only to ask anything but also to do anything for you. So if you, you know, need a piece of software created, it can kind of do it from scratch all at once. If you need a research report on some complicated topic, it can do that for you. If you needed it to, you know, plan an event for you, it could do that, too.
Is it more agentic in nature, in the sense that with sequential tasking you're one step closer to it? Because I was trying that.
It's much better at things like that. The sort of robustness and reliability is greatly increased, and that's very helpful for agentic workflows. So I'm very impressed by how long and complex of a task it can carry out.
So we did a call a couple of weeks ago when I was bugging you about what sectors and themes to invest in for the next decade.
Uh, so I don't want to talk about that too much. I thought we'd keep today about first principles and how the world is changing by virtue of all that is changing in the world that you dominate. So the very first thing I want to start with: say I were a 25-year-old boy or girl living in Mumbai or Bangalore in India. I know you've said a bunch of times that colleges are not holding on to the place of relevance they might have had when I was growing up. But what do I do now? What do I study? If I'm starting a company, what kind of company do I start? Or if I were to even find a job, what industry do you think has some kind of tailwind? I'm not talking ten years down the line, but even as close as three to five years down the line.
First of all, I think this is probably the most exciting time to be starting out one's career, maybe ever. I think that 25-year-old in Mumbai can probably do more than any previous 25-year-old in history could. It's really amazing what you can do with a tool like this. I felt the same way when I was 25. The tools then were not as amazing as the tools we have now, but we had the computer revolution, and a 25-year-old then could do things that no 25-year-old in history before would have been able to, and now that's happening in a huge way. So whether you want to start a company or be a programmer or go work in any other industry, create new media, whatever it is, the ability for one person to use these tools, have great ideas, and implement them with what would have taken decades of experience or teams of people is really quite remarkable. In terms of particular industries, I am very excited about what AI is going to mean for science, and the amount of science that one person will be able to discover, and the rate at which they can do that. Clearly, it's transforming what it means to program computers in a huge way, and people will be able to create completely new kinds of software at huge new scale. Definitely for startups, if you have that idea for a new business, the ability for a very tiny team to do a huge amount of work is great. But it feels like this is just now a very open canvas. People are limited, to a degree they've never been before, only by the quality and creativity of their ideas, and you have these incredible tools to help you realize them.
Is there anything in particular you want to say about GPT-5? Then I can ask you questions around that.
GPT-5 does feel to us like it's going to be another big step forward in how people use these systems. The level of capability, the level of robustness, the reliability, and the ability to use this for a lot of tasks in life, to create software, to answer questions, to learn, to work more efficiently: this is a pretty significant step forward. Each time we've had one of these, we have been amazed by the human potential it unlocks. And in particular, India is now our second largest market in the world, and it may become our largest. We've taken a lot of feedback from users in India about what they'd like from us: better support for languages, more affordable access, much more, and we've been able to put that into this model and into upgrades to ChatGPT. So we're committed to continuing to work on that. But every time we've had a major leap forward in the capability we can bring to users, we've been amazed by what those 25-year-olds go off and do in terms of creating new companies, learning better, getting better medical advice, or whatever else.
Is there anything I could build on top of GPT-5 today, as a 25-year-old in India, that you think is low-hanging fruit, per se, that I should definitely look at?
I think you could build an entire startup way more efficiently than you ever could before. Now, obviously I'm biased because this is near and dear to my heart, but the fact that as a 25-year-old in India or anywhere else, maybe with a couple of friends, maybe just by yourself, you could use GPT-5 to help you write the software for a product much more efficiently, help you handle customer support, help you write marketing and communications plans, help you review legal documents, all of these things that would have taken a lot of people and a lot of expertise, and you now have GPT-5 to help you do all of this. That's pretty amazing.
If I could push you to be a bit more specific: I get science, but what do I study?
Say I've been studying engineering or commerce or arts or something like that.
Is there any specific thing I should study in order to use AI to develop something in science?
I think the most important specific thing to study is just getting really good at using the new AI tools.
I think learning is valuable for its own sake, and learning to learn is this meta-skill that will serve you throughout life. Whether you're learning engineering, like computer engineering, or biology or any other field, if you get good at learning things, you can learn new things quickly. But fluency with the tools is really important. When I was in college, or in high school, it seemed to me that the obvious thing to do was learn to program. I didn't know exactly what I was going to use that for, but there was this new frontier that seemed very high leverage and important to get good at. And right now, learning how to use AI tools is probably the most important specific hard skill to learn. And the difference between people who are really AI native, who think of everything in terms of those tools, and people who don't, is huge. There are other general skills. Learning to be adaptable and resilient, which I think is something really learnable, is quite valuable in a world that's changing so fast. Learning how to figure out what people want is really quite important. Before this, I used to be a startup investor, and people would ask me what the most important thing for startup founders to figure out was. My predecessor Paul Graham had this answer that has always stuck with me, to give to founders, and it became the motto for Y Combinator: make something people want. That sounds like such an easy instruction. I have watched so many people try so hard to learn how to figure that out and fail, and then I've watched many people work really hard to learn how to do that and get great at it over a career. So that's something as well. But in terms of the specifics, like are you supposed to take the biology class or the physics class, I don't think it matters right now.
And to build on top of that, when you say learn to adapt and change and learn AI tools faster, is there a path? I'm just looking for a light that somebody could begin walking towards. How does one get better at the AI tools that are available out there?
Well, one great thing to do: GPT-5 is quite good at helping to create small pieces of software very quickly, much better than any model that I've used. And in the last few weeks I have surprised myself with how much I've used it to create a piece of software to solve some little problem that I have in my life. It's been an interesting creative process, because I'll ask it for a first draft, I'll start using that, and then I'll say, hey, with this feature it would be better, or with this other thing I'd be able to do something differently. Or I have started using it and realized that with my workflow I really needed this. And by putting more and more of the things that I have to do into this kind of a workflow, that's been a very interesting way for me to learn how to use this.
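As an illustration of the iterative "first draft, then refine" workflow Sam describes, here is a minimal sketch in Python using the OpenAI SDK: ask for a first draft of a small script, try it, then feed back the change you realized you needed. The model name "gpt-5" and the example prompts are assumptions for illustration only, not anything specified in the conversation.

```python
# Illustrative sketch of an iterative drafting workflow with the OpenAI Python SDK
# (openai>=1.0). Assumes OPENAI_API_KEY is set and a model id like "gpt-5" is
# available to your account; the prompts below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Keep the whole conversation so each refinement builds on the previous draft.
messages = [
    {"role": "user",
     "content": "Write a small Python script that renames the photos in a "
                "folder by the date they were taken."},
]

reply = client.chat.completions.create(model="gpt-5", messages=messages)
draft = reply.choices[0].message.content
print(draft)  # first draft: run it, see what's missing

# After using the draft, ask for the change you discovered you needed.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "Nice. Now also skip files that were already "
                            "renamed, and print a summary at the end."})

reply = client.chat.completions.create(model="gpt-5", messages=messages)
print(reply.choices[0].message.content)  # revised draft
```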
You mentioned Paul Graham, and I was reading this letter or report he wrote in 2009 where he spoke about five founders to watch out for. I think you were 19 at the time, and he mentioned you along with people like Steve Jobs and Larry and Sergey. Why was that? You hadn't accomplished anything of note like they had. What did Paul see in you? And what do you think is the innate skill set that sets you apart from your peers?
That was very nice of him. I remember that. I certainly remember at the time feeling that it was very deeply undeserved, but grateful that he said that. I mean, there are a lot of people who are starting great companies. We got lucky here in many ways. We've also worked super hard. Maybe the one thing that I think we have done well here is to take a very long time horizon and think very independently. When we started, it was four and a half years before we had a first product, and it was very uncertain. We were just doing research, trying to figure out how to get AI to work, and we had very different ideas than the rest of the world. But I think that ability to trust our own conviction over a long time horizon, without a lot of external feedback, was very helpful.
Is that a "we" or an "I" thing? Because I'm speaking about when you were 19.
Oh, sorry, I thought you were asking about OpenAI. Me when I was 19, I barely remember that. I don't know. I was a naive 19-year-old that didn't really know what he was doing. And this is not false modesty. I think I've done impressive things now, but my own self-conception at 19 was deeply unsure of myself and very unimpressive.
If the world of tomorrow were to be an AI kingdom of sorts, you're definitely some kind of prince. And I don't know if you follow Machiavelli, but he used to say a very interesting thing. He said that a prince should always appear, but not be, religious, compassionate, trustworthy, humane, and honest. I've watched a lot of your interviews of late, and I've heard you say repeatedly that you're not formidable, and use words like that. This projection of humility, is it apt for the world we live in, or the world you're walking into?
I'm not sure it's apt for either one. But when I was 19, to go back to what you said, I assumed that the people running these big tech companies really had it all figured out, that there were adults in the room, somebody had a plan, and there were these people that were way different than me who understood how to do things; the companies were running very well, there was not much drama, and everything was being handled by the adults. And now that I'm supposed to be the adult in the room, I will tell you: I think no one has a plan. No one really has it all working smoothly. Everyone, or at least I, am figuring it out as they go. You have things you want to build, and then things go wrong, and you bet on the wrong people or the right people, and you get a technological breakthrough here or you don't there. And you just keep putting one foot in front of the other, you put your head down and work, and you have these tactics, some of which become great strategies, some of which don't. You try something, and the market reacts in some way, or your competitors react in some way, and you do something else. And now my conception is that everybody is kind of figuring it out as they go. Everybody is learning on the job, and I don't think that's false humility. I think that's just the way the world works, and it's a little strange to be on this side of it, but that is what it seems like.
I'm not even speaking so much about the authenticity of the humility, but more, again, from the lens of somebody starting to build for tomorrow, because that's who we speak to. Is that a good image to project into the world? Does the image of humility really work today?
You mean should someone project that image?
Yeah.
I mean, I certainly have a very negative reaction to people who are projecting certainty and confidence when they don't really know what's going to happen. And the reason is not just because it's annoying, which it is, but also because I think if you have that kind of a mindset, it is harder to have the culture of intellectual openness, to listen to other viewpoints, and to make good decisions. A thing I say all the time to people is that no one knows what happens next. And the more you forget that, the more you're like, I am smarter than everybody else, I have a master plan, I'm not going to pay attention to what my users are saying or where the technology takes us or how the world is reacting, because I know better and the world doesn't know as much, I think you just make worse decisions. So having that kind of open mindset, and the sort of curiosity and willingness to adapt to new data and change your mind, I think that's super important. The number of times we have thought we knew something, only to get smacked in the face by reality here, has been a lot, and one of our strengths as a company is that when that happens, we change what we're going to do. I think that's been really great and a big part of what's helped us succeed. So maybe there are other ways to succeed, and maybe projecting a ton of bravado into the world works, but the best founders that I have watched up close throughout my career have all been more of the quick-learning and adapting style.
And you probably know more about this than most because of your role at Y Combinator.
I have a lot of data points on it, at least. Yeah.
When we met in Washington a couple of years ago at the White House, I remember when we were speaking and you went somewhere, I was speaking to your partner, and you guys had a kid.
We did.
And how is that?
It is my favorite thing ever. But I mean, I know that I have nothing that is not a cliché to say here. It is the coolest, most amazing, most emotionally overwhelming experience, in the best ways and the hard ways. Everything said about how great it is, how intense it is, how it's a kind of love you didn't know you could feel, it's all true. I have nothing to add other than I strongly recommend it, and it's been really wonderful. It's amazing.
So I ponder on this a lot, Sam: kids, why people have kids, and also questions like what happens to religion and marriage tomorrow. Can I ask you why you had a kid?
Family has always been an incredibly important thing to me, and it just felt like, and I didn't even know how much I underestimated what it was actually going to be like, but it felt like the most important and meaningful and fulfilling thing I could imagine doing, and it has so far, so early, exceeded all expectations.
Do you think it's the biological need to procreate?
I don't know. This seems like a thing that is so deep it's difficult to put into words. But I feel confident: everyone I know who, looking back on their life, has had a great career and had a family has either said, I'm so glad I took the time to have kids, that was one of the most important things I've ever done, one of the best things I've ever done; or they've said that was by far the best thing I've ever done, that was way more important than any of the work I ever did. And I was willing to take the leap of faith that that would be true for me, too. And it certainly seems like it will be. Yeah, it is. And if it is just a biological hack, I don't care. I'm so happy. But there's a sense of responsibility, and family is the word that keeps coming to mind. It's just really great.
The world seems to be having fewer kids. Do you have an insight into the future of marriage, religion, and kids?
Yeah, I hope that creating family, creating community, whatever you want to call it, will become far more important in a sort of post-AGI world. I think it's a real problem for society that those things have been in retreat. That just feels strictly bad to me. I'm obviously not sure why that's been happening, but I hope we'll really reverse course on that. And in a world where people have more abundance, more time, more sort of resources and potential and ability, I think it's pretty clear that family and community are two of the things that make us the happiest. And I hope we will turn back to that.
As societies get more affluent, if one were to buy into the mimetic desires of people, we all tend to want what other people want, not necessarily what other people have. If we all had more, do you think we would still want more, if we all had enough?
I do sort of think that human demand, desire, the ability to play status games, whatever, seems pretty limitless. I don't think that's necessarily bad, or not all bad. But yeah, I think we will figure out new things to want and new things to compete over.
Do you think the world largely retains the current model of capitalism and democracy? Let me give you a scenario. What happens if a company X, let's say OpenAI, gets to the point where it is 50% of world GDP? Does society allow for that, or...
I would bet not. I don't think that will happen. I think this will be a much more distributed thing. But if for some reason that did happen, I think society would say, "We don't think so. Let's figure out something to do here." The analogy I like most for AI is the transistor, which was this really important scientific discovery that for a while looked like it was going to capture a ton of the value, and then turned out to be something that just gets built into tons of products and services. And you don't really think about the fact that you're using transistors all day long. It's just kind of in everything, and all these companies make incredible products and profits from it in this very distributed way. So I would guess that's what happens, and it's not like one company is ever half of global GDP. At one point I did worry that that might happen, but I think that was a naive take.
Right. But do you think the odds of the world moving towards socialism go up? Or if something gets that large, will it get nationalized and we become more socialist?
I don't know if something will get nationalized. I don't know if the world will officially turn towards socialism, but I expect that social support, or redistribution, or whatever you want to call it, will increase over time as society gets richer and as the technological landscape shifts. I don't know what way it's going to happen, and I expect in different countries it'll happen in different ways. I think you'll see experimentation with new kinds of sovereign wealth funds, new kinds of universal basic income ideas, redistribution of AI compute. I don't know exactly what, but I suspect we'll see a lot of experimentation in society here.
On universal basic income, I think Worldcoin was a very interesting experiment. Can you tell us a bit about what's happening there?
The idea was: we have all this AI coming, and we really want to care about humans as special. Can we find a privacy-preserving way to identify unique humans and then create a new network and a new currency around that? So it's a very interesting experiment, still early but growing quite fast.
If AGI eliminates scarcity by virtue of increasing productivity, could one also assume that it would be deflationary in nature? Capital or money loses its ability to return a rate of return, and capital no longer remains a moat in the world of tomorrow.
I feel confused about this. I think if you look at the basic economic principles, it's supposed to be hugely deflationary. And yet, if the world decides that building out AI compute today is super important to things tomorrow, maybe something very strange happens with the economy, and maybe capital is really, really important because every piece of compute is so valuable. I was asking someone at dinner the other night if they thought that interest rates should be minus 2% or 25%. And he kind of laughed and said, well, that's a ridiculous question, it has to be... And then he stopped and said, actually, I'm not sure. So I think it should be deflationary eventually, but I could see it being weird in the short term.
That's actually a very interesting thing to say. Do you suspect it would be minus 2%?
Eventually, right, but I'm not sure. And maybe it's just that we're in this massive expansionary time where you're trying to build the Dyson sphere in the solar system and you're borrowing money at crazy rates to do that. And then there's more expansion beyond, and more and more. I don't know. I find it very hard to see more than a few years into the future at this point.
The conversation we were having a couple of weeks ago, I was doing more research on the sectors you had suggested. I think we agreed on an older and sicker world. You also made a case that as discretionary spend goes up, gateway luxury brands might do well.
Yeah.
What happens to them in a deflationary world? Because the value of these purchases goes down.
Maybe not. I mean, in a deflationary world, some things can face huge deflationary pressure and others can be the sink for all of the extra capital. So I'm actually not sure they do go down in a deflationary world. You know, I would bet they go up, actually.
Yeah, I think so. Because the excess capital has to flow somewhere.
When you look at classical economic theory, like Adam Smith and stuff like that, the Austrian school always spoke about the marginal utility of things. If you have one kettle at home to make tea, it has X in value. When you have two kettles, it still has some value. But when you have 20 kettles, it has no value. Do you think the world goes in that direction?
Yeah. So 20 kettles doesn't help you. But even if you're only going to spend two hours a day playing video games or whatever, and that amount of time is fixed, so you don't need 20 hours' worth of video games, if those two hours of games get better and more entertaining forever and ever and just keep getting more compelling, that still has value to you. And I think there are a lot of categories where we will find that people can just get much better stuff, even if they don't necessarily get more of it. I think we'll see this in a really big way.
Do you think there's a use case for the wrappers that are getting built on these large models right now? I was in the US recently and I met Harvey, for example. What happens to a wrapper like that? Does it get innovated out by the model itself at some point in time?
Some of them yes, and some of them no. Sometimes you can obviously predict when one is going to go one way or the other, and sometimes it really depends on the choices the company makes down the line. The main thing I would say is that using AI itself does not create a defensible business. You see this with every technology boom, where people are like, well, I'm a startup doing X, and because I'm using the latest technology trend, the normal rules of business don't apply to me. And that's never true. You've always got to parlay that advantage that comes from using the new technology into a durable business with real value that gets created. And it's kind of a race against the clock to do that. So you can definitely build an amazing thing with AI, but then you have to go build a real defensible layer around it.
If I were to build a business on top of your model, let me give you the example of Amazon. If I sold a certain kind of t-shirt and I sold a lot, and Amazon had all the data, eventually Amazon probably started a white-labeled brand which was very similar to mine and almost cannibalized my business. Should one worry that will happen here as well, because you're no longer just a model but you're foraying into so many different businesses?
I would come back to that example of the transistor. We are building this general-purpose technology that you can integrate into something in a lot of ways. But we keep following our equivalent of Moore's law, and the general capability keeps going up. If you build a business that gets better when the model gets better, then you should keep doing well if we continue to make progress. If you build a business where, when the model gets better, your business gets worse, because the wrapper was too thin or whatever, then that's probably bad, in the same way that it's been bad in other technology revolutions.
So there are clearly companies building on top of AI models that are creating huge value and very deep relationships with their customers for themselves. Cursor is a recent example of a company that is just exploding in popularity and, I think, has really durable relationships with their customers. And then there are many others that don't, and that's always the case. It does seem to me like there are more companies getting created now than in previous technological revolutions that feel like they have a chance at real durability. Maybe an example we could use to ground this: when the iPhone first came out and the App Store first came out, the first set of apps were pretty light, and a lot of them ended up being features that made it into future versions of iOS. You could sell a flashlight app for a dollar that turned on the flash on your phone, and you made a lot of dollars doing that, but it wasn't sticky, because eventually Apple just added that into the operating system, where it belonged. But if you started something that was complicated and the iPhone was just an enabler for it, like Uber, that was a very valuable long-term thing to do. And in the early days, like the GPT-3 days, I think you had a lot of kind of toy applications, as you should, many of which didn't need to be standalone companies or products. But now, as the market has matured, you're really seeing some of these more durable businesses form.
So if you lay emphasis on owning the customer, almost like the interface with the customer, would you say the relationship gets deeper when I sell a service on top of your model versus a product? Because if it's a product company, the exchange happens once. But if it's a service company, it happens repeatedly, and there is room for me to build in taste in that transaction, which is repetitive in nature.
Yeah, generally I agree with that.
A part of my world is creating content, which I do once a month. Say a model, to a large extent, is able to factor in my vintage, my tenure, and my evolution and throw out an output which is predictive in nature with a fair degree of efficiency. Now, if I behave in the same predictable manner, tomorrow that will be less valuable than me being contrarian: contrarian not to the world but contrarian to my own behavior, almost. So do you think the world inordinately favors contrarian behavior tomorrow?
Yeah, that's a good point. I think so. The thing I'm thinking is, how much will the models learn to do that? You want to be contrarian and right. Most of the time you're contrarian, you're contrarian and wrong, and that's not that helpful. But yeah, the ability to come up with the kind of contrarian right idea, which the models today just can't do at all, and maybe they'll get better at at some point, I bet the value of that should go up over time. Getting good at doing things models can't do seems like an obvious increase in value.
Outside of being contrarian, is there anything else that I could do that a model will take longer to learn?
Look, the models are going to be much smarter than we are, but there are a lot of things that people care about that have nothing to do with intelligence. Maybe there can be an AI podcast host that is much better than you at asking interesting questions and engaging, whatever. And I personally don't think that AI podcast host is likely to be more popular than you. People really care about other humans. This is very deep. People want to know a little bit about your life story, what got you here. They want to be able to talk to other people about this shared sense of who you are. And there's some cultural and social value in that. We are obsessed with other people.
And why is that, Sam? Why do you think that is?
I think that's also deep in our biology. And, you know, to go back to an earlier comment, you don't fight things that are deep in biology. I mean, I think it makes a ton of sense why we would evolve that way, but here we are. So we're going to keep caring about real people, and even if the AI podcast host is much smarter than you, I think it's very unlikely he'll be more popular than you.
So, in a perverse way, being stupider will be more novel than being smart.
I don't know if it's stupider or smarter that has the novelty, but I think being a real person, in a world of unlimited AI content, will increase in value.
Is a real person somebody who screws up, unlike a model?
I mean, certainly real people do screw up, so maybe that's part of what we associate with a real person. I'm not sure. But I do think just knowing whether it's a real person or not is something we really, extremely care about.
What is the difference between AGI and human intelligence today and tomorrow?
So with GPT-5, you have something that is incredibly smart in a lot of domains at tasks that take seconds to a few minutes. It's very superhuman at knowledge, at pattern recognition, at recall on these shorter-term tasks. But in terms of figuring out what questions to ask, or working on something over a very long period of time, we are definitely not close to human performance. An interesting example that one of our researchers gave me recently: if you look at our performance in math, a couple of years ago we could solve math problems that would take an expert human a few minutes to solve. Recently we got gold-level performance on the International Math Olympiad; each of those problems takes about an hour and a half. So we've gone from a thinking horizon of a few minutes to an hour and a half. To prove an important new mathematical theorem maybe takes a thousand hours, and you can predict when we will be able to do a thousand-hour problem, but certainly in the world today we cannot at all. So that's another dimension where AI can't do it.
I was in the US, between San Francisco and New York, the last couple of months, and I met a whole bunch of AI founders. The one thing everybody seemed to agree on is that for AI the US is a few years ahead of most others. They also thought that for robotics, China seems to be ahead. Do you have a view on robotics and what happens there, like humanoid or other forms of robotics?
I think it will be incredibly important in a couple of years. I think one of the things that is going to feel most AGI-like is seeing robots just walk by you on the street doing kind of normal day-to-day tasks.
Is there a reason why they need to have humanlike form?
Well, you can certainly have non-humanoid forms, but the world is really built for humans: door handles, steering wheels in cars, factories. A lot of this we've built for our own kind of morphology. So there will of course be other specialized robots too, but the world is built, and I hope stays built, for us. So robots that match that form factor seem like a good idea.
If I'm a young guy looking to start a robotics company, but somebody else has manufacturing scale, and I really want, as an Indian guy, to be able to build and compete there, how do I make up for manufacturing scale as someone starting up?
Well, eventually, once you build enough robots, they can make more copies of themselves. But in the short term, I think you probably have to find some really good partners that know a lot about manufacturing. And we're interested in robots, so we're thinking about this, and it's definitely a new skill for us to learn.
Sam, what happens to the form factor?
I've been using the cell phone for a long time. I know you will likely not speak about what you're doing with Jony Ive and what happens there, or maybe you will, but what happens to form factor overall?
One of the things that I think will be defining about the difference of AI versus the previous way we've been using computers and technology is that you really want AI to have as much context as possible, do stuff for you, and be proactive. A computer or a phone is kind of either on or off. It's in your pocket, or it's in your hand and you're using it. But you might want AI to just be like a companion with you throughout your day, alerting you in different ways when it can do something to help you, or when there's something really important you need to know, or reminding you of something that you said you needed to do earlier in the day. And the current form factors of computers are, I think, not quite right for that. They have this either-on-or-off binary that isn't quite what we want for the sort of sci-fi dream of the AI companion. So for the form factor that enables that, you could imagine a lot of things. There are people talking about glasses and wearables and little things that sit on your table, and I think the world will experiment with a lot of those. But this idea of sort of ambiently aware physical hardware feels like it's going to be important.
Is that the form factor with Jony Ive, like a sensor?
So we'll try multiple products. But I think this idea of trying to build hardware that an AI companion can sort of embody itself in will be an important thread.
Right. The last two things I want to ask you, Sam: one is about fusion, because I know you've invested in Helion and you're a big proponent of it. Does that solve the climate change problem? Would you put money on that?
Well, it certainly helps a lot. I suspect that we've already done sufficient damage to the climate; we're going to have to undo some damage, too, even if we got to switch to fusion right away. But it would certainly be a great step forward.
And the last question I have for you, Sam, is the question I care most about. What's in this AI realm for India as a country? What's the opportunity for us?
As I mentioned earlier, I think India may be our largest market in the world at some point in the not very distant future. The excitement, the embrace of AI in India, and the ability for Indian people to use AI to just sort of leapfrog into the future and invent a totally new and better way of doing things, and the economic benefits that come from that, the societal benefits: if there is one large society in the world that seems most enthusiastic to transform with AI right now, it's India, and the energy is incredible. I'm looking forward to coming soon. It's really quite amazing to watch, and I think the momentum is unmatched anywhere in the world.
I feel like the question really is how we transition from being a consumer to being a producer, where we can build something that other people use outside of India. I mean, there are lots of things happening there, good, but that was the thing I meant.
I think that's really happening already in a big way. The entrepreneurial energy around building with AI in India is quite amazing, and we hope to see much more of it.
Right. Yeah, super. Thank you, Sam, for doing this.
Thank you for having me.
Thank you.
Appreciate it.
I'm going to message you.
Okay. Good to see you. Thanks for doing this.