Sam Altman Unfiltered: ChatGPT, AI Risks & What’s Coming Next, 40 Questions in 60 Minutes
By The Indian Express
Summary
Topics Covered
- AI Leaps to Research-Level Math
- India Shifts from AI Consumer to Builder
- AI Disrupts Jobs but Creates More
- Democratize AI to Distribute Power
Full Transcript
Sam, just looking at you, I feel like the chaos of India is getting to you. It's day two.
>> It's been a busy two days.
>> A very warm welcome to Express Adda. Guys, a very special day for us.
Forbes magazine calls OpenAI one of the most important companies in the world.
And the man who is, if we look back at today 20 years from now... I think I'll just wait for the audience to settle down a bit.
Okay.
Okay. Are we settled? Yeah.
If we look back at today, 20 years from now, I don't think it'll be an overstatement to say that the gentleman in front of me today is probably that one person who has made a tectonic shift. If anything comes close to the industrial revolution, it probably is this, and it's the gentleman in front of me right now with whom it all probably started. Is that a fair statement to make, Sam?
>> No. You know, I think this is generally true for many companies: the person running the company gets way too much credit relative to the work everybody else does. But in our case in particular, this has really been a story of a scientific discovery. There has been a handful of researchers who did a miracle and figured out a very deep thing about the way the world works. The thing that is so special about deep learning, and what has led it to be so general, is that we, this small set of researchers, figured out this algorithm that could learn anything, and at scale it gets better and better. Then a whole set of people have worked very hard to deliver that scale on these models, and the whole world has figured out how to build incredible products and services around that. But more than anything, this is a story of the scientists.
>> Yeah. And from the last time you were in India to now, a lot has changed in the world. But what do you think has changed in India with respect to AI?
>> It's just a little over a year ago that I was here, and before talking about what's changed in India, I think it's worth talking about what's changed in AI, because that will ground this. A year ago, AI was just able to do high school math.
And this was incredible. AI could do a very good, not an excellent, but a very good job at high school mathematics, and people couldn't believe it. I was in India, speaking at an event like this, and I was talking about it. It had just happened; people genuinely felt wonder about it. They're like, you know, this can do what an 11th grader can do. This is amazing, because only a couple of years before that, AI couldn't really do any math at all. It couldn't do grade school math. It was struggling with that.
By last summer, it was competing at the hardest mathematics competitions we had in the world and doing okay. And last week there was a new thing called First Proof, where mathematicians put out 10 research problems whose solutions were not known, problems that took our best mathematicians at the cutting edge of research, of new knowledge, to figure out, because the answers hadn't yet been shared. I believe our latest AI got seven of those problems right. So AI has gone from doing okay at high school math to being able to do new research-level mathematics and figure out new knowledge. The same thing is happening in physics. And this is an amazing change in a year. We've gone from AI that could do what we expected a very smart high school student to do, to pushing the edge of human knowledge.
So that's change number one. The other change is that AI has gone from being able to write a little bit of programming to completely changing what it means to be a computer programmer. Last time I was here, people were amazed at these sort of autocomplete tools for code. And now, one year later, if you type an idea into something like Codex, you can have an entire application created. The job of a programmer has, I think, changed more in this last year than it has changed in any year that I've been an adult.
adult. Um codeex is our f uh India is our fastest growing market for codec. So especially
what's happening here to get to the next part of the question is really amazing.
The biggest change that I feel from the last time I was here: when I was here last time, it felt like India was a consumer of AI. People were using these services, people were doing other stuff. And now the builder energy in India is off the charts.
>> And you in fact said something a little controversial last time. Now it's controversial because at that time you had painted a picture of despair, saying that a $10 million fund would not be enough to build a meaningful LLM out of India. And now, yesterday, we were talking about a full stack coming out of India, and India can actually lead in AI. Yeah.
>> That comment got taken out of context. That was a few years ago. But the comment was not that India couldn't do an amazing job with a $10 million company and a $10 million model. I think that clearly is the case. It was: could you make a very frontier model for $10 million? I think you couldn't then. I think it's even more true now that you can't.
>> It's even more true now that you can't make a frontier model for $10 million?
>> Yeah, they've gotten very expensive.
>> But with enough funding, for sure an Indian company could do that. And the smaller, narrow models that India is building are incredible.
>> So is that surprising to you? Is there anything that's actually surprised you between your last trip and this trip about India and AI that you didn't expect?
>> That stat that I mentioned a couple of minutes ago, that this is the fastest growing market for Codex in the world. I'm very happy. I guess I shouldn't be surprised, but it's great to see. I was at IIT Delhi this morning, and the energy from people building this new generation of startups is a place where I expect India to surprise the world on the upside.
>> And obviously the other side of that coin is the disruption in jobs, especially in the IT enterprise services sector; about 8% of our GDP comes from that one space. A lot has been said about this in the last 10 days because of Codex, and before Codex it was Claude, and it just showed how fast software has moved. I know a lot has been said about it in the last week, but I still want you to reflect on it: how much of that is a threat for India, and how should India think about it?
>> I do think that is going to change a lot, and there's going to be a big impact there. And I think it is never helpful to pretend otherwise. However, what I expect to happen, as has happened with every other step forward in computer programming, is that people will operate at a higher level of abstraction. They will be able to do more. There will still be a need for it broadly. The amount of product and code that's produced will increase a lot. The expectations will go up. But I think as long as the country and the companies that do this adapt quickly to this new world, there will be plenty of new things to do. I'm not a jobs doomer; I do think there will be jobs for people in the future. I think there will be a lot of jobs. Every technological revolution has panicked about jobs going away, and every technological revolution has found new jobs on the other side. The promised leisure has never happened. You know, this idea that we all get to go on permanent vacation, that's never quite worked out. I don't think it will. I hope we get more in that direction, but our desire to be useful to each other, our desire for more and better stuff, our desire to express our creativity: we never kind of reach the end of that. I think there is no natural limit to that. There's a whole big universe out there to go explore if we can figure out, you know, space travel or whatever. I don't know where it's going to go.
I do think clearly things are going to evolve fast, and you have to be on the right side of that.
>> And you know, that statement about jobs is a very loaded statement in India, because there are 500 million people under the age of 30. You understand the gravity of jobs and the anxiety that India has. Do you discuss this when you meet the prime minister, when you meet people who matter in policy? How much of that comes up, and what answers do you have to that anxiety?
>> Yeah, the main things that come up with political leaders are infrastructure, jobs, sort of fair distribution of benefits, and safety, broadly speaking. The proportion of those is different in different countries, and the level of focus on the upsides versus the potential downsides is also different in different countries.
We talk about jobs impact a lot. One version of this that comes up in, it feels like, almost every conversation I have socially is: what should my kids study for the future? What's it going to look like? This is a version of asking, you know, where is the economy going to go?
>> It's really hard to answer that specifically. One thing I love to do is to read about the history of technology, and if you look at, say, the primary source material of people who were experiencing the industrial revolution, there was a lot of panic about jobs. There were a lot of predictions about what the jobs were going to be, a lot of fear that there would be no jobs, and a lot of predictions about what the new jobs would be. They were sort of shockingly wrong. And none of them were like, I'm going to be the CEO of an AI company. Certainly none of them were like, I'm going to be a YouTube influencer. So I think this is very hard to predict. But there are skills that will work no matter what: fluency with AI tools, resilience, adaptability, figuring out what people are going to want and how to be useful to them, how to work with other people. These are all very good things to learn. And, yeah, I think it's useless to pretend there won't be a big change. The change won't be as fast as some people in the AI industry predict, because society always has more inertia and it always takes longer. But eventually the change will be huge, and we'll find all sorts of new things to do.
>> I want to come back a little later to where you think that resistance, that inertia of society, will come from: does it come more from the developed economies or from the underdeveloped ones? But before that, a little bit more about India and AI. That five-layer cake that Jensen spoke about, the whole AI stack: the energy, the data centers, the chips, the models, and the applications. In that entire five-layer cake, which areas do you think India has the ability and the right to win in, and which areas do you think are just a very risky bet for India to make?
>> Whether you subscribe to it being a five-layer cake or a seven-layer cake, people define it a little bit differently, but I think India should play at all of those levels. I think there are important advantages to vertical integration, and being able to work across all those levels, for an economy of the size and importance of India's, is a good thing to do. And, yeah, I talked with the prime minister
just before coming here; he is certainly motivated to play at all of those levels, and I think it's important.
>> And if we do play at all those levels: you're talking about more than 70% of the population already on the internet; I don't know what percentage of that you estimate is using artificial intelligence, but assuming we want a lot of it to move into the AI domain, that's a big change. Last time you were here, there was a lot of fear and anxiety about AI. Now we're all about let's embrace it and run with it. Does the world have the computing power it needs for India to become an AI-first society?
>> Not yet. But the world is rapidly going to have to work together to solve that problem. A question I'd love to ask people is: how many GPUs would you like working for you all the time? How many GPUs would you personally like thinking about your problems, helping you with your work, implementing your ideas, writing the software you want, operating the humanoid robots you'll have in your home and garden and everything else, building new houses for you, whatever. How many do you want? And people give different answers. No one ever says less than one. Some people say a thousand: I would like a thousand. If you multiply that out and say there are 8 billion people in the world, we have no way to deliver 8 trillion GPUs anytime soon. I mean, that just seems ridiculous.
>> Yeah.
>> We're not going to do that, to be clear. We're not going to do that on Earth, at least.
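The back-of-the-envelope multiplication behind that "8 trillion" figure is easy to check; a minimal sketch, using only the round numbers quoted in the conversation:

```python
# Rough check of the figure quoted above: if each of the world's
# ~8 billion people wanted a thousand GPUs working for them,
# total demand would be 8 trillion GPUs.
world_population = 8_000_000_000  # approximate world population
gpus_per_person = 1_000           # the high end of the answers Altman cites

total_gpus = world_population * gpus_per_person
print(f"{total_gpus:,}")  # 8,000,000,000,000
```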
>> But I do think it points to the level of ambition we need to have about how much compute capacity we're going to build out. And this will be the most expensive, complex infrastructure project the world has ever collectively taken on, for all of our welfare. So we're going to need a lot of compute capacity. The good news is that it would be impossible to build this out the old-fashioned way, but we'll have AI and robots helping us do it. And, you know, we'll figure it out.
>> And is that why space keeps coming up nowadays? There's another space race that I keep hearing about.
>> I honestly think the idea, with the current landscape, of putting data centers in space is ridiculous. It will make sense someday, [applause] but if you just do the very rough math of launch costs relative to the cost of power we can produce on Earth, to say nothing of how you're going to fix a broken GPU in space (and they do break a lot still, unfortunately), we are not there yet. There will come a time; space is great for a lot of things. But orbital data centers are not something that's going to matter at scale this decade.
>> But then all this infrastructure buildout, Sam. It feels like it kind of forces organizations like yours to have a very intimate relationship with government. Is that something that's changing with AI? This whole power structure: at one point big tech was a very privatized force with a very complicated relationship with government. Now it seems you need to have a good relationship with government, whether it's Indian companies with the Indian government or companies like yours with the US government. Is the government a big enabler?
>> Yeah, I think it's going to be really important, not just for building out infrastructure, but given the level of impact this is going to have on society and the need to truly democratize this technology. Governments are going to have to be involved, and companies like ours are going to have to partner with governments.
>> And how has that been? I mean, have you ever paused and thought about the evolution of that relationship between big tech and government? By the way, can I call you big tech, or not really?
>> I don't feel like big tech.
>> Okay.
>> I don't think we're there yet.
>> Okay. All right. But tech companies like yours and government, I mean...
>> I think the tech industry started out with this extremely libertarian, you know, we don't need the government, the government doesn't need us sort of view. That has changed a lot. Even before AI, over the last couple of decades, as the companies got bigger and more central to the economy and the way the whole world works, that changed significantly. But maybe never before has it been this important, just given the scale of the infrastructure that needs to happen.
>> And how do you feel about that? Because one of the things we keep hearing is that this White House is very close to Silicon Valley. Silicon Valley was a very close sponsor of JD Vance. That whole relationship is a very intimate one. Is that true? Is that good? Is that bad?
>> I would say close in some ways and not close in others. You know, there are some tight ties, and then also this administration has had some big criticisms of tech. Is that good or bad? I think close cooperation between tech companies and the government is going to become increasingly important over time. It obviously won't be a perfectly smooth relationship, but the better it can be, I think, the better for all of us.
>> And what about government-to-government relationships? What got announced today was a very interesting partnership of several countries who are thinking about AI in a very advanced way. Is it all these countries versus China? Is that what's playing out, or are we all in a race that China is also running in?
>> I suspect it'll turn out to be way more complex than that, and I think it will shift over time. I do sort of think that AI will become one of the most important political issues in the world, one of the highest-order bits of geopolitical tension and cooperation. But I don't think it'll be a fixed thing. As it develops, and as has always happened, political alliances shift over time. I expect them to shift more in the future, not less, as things move faster.
>> But how do you think about China in all of this? I mean, we keep hearing that China is light years ahead, that China's got all this energy, all this renewable energy, all this robotics, all this technology. Are we hearing the right thing, and why are they so far ahead? What have they done?
>> I would say very ahead in some areas and not ahead in others. I mean, that sounds like a boring statement that's probably always true in the world. [laughter] In terms of manufacturing physical robots, clearly ahead. A big edge on things like electric motors, magnets, stuff like that. Clearly ahead on energy buildout. So there are places where China is significantly ahead of other countries, and then there are places where I think we're ahead of them, and my guess is that that's sort of what it's always been like and what it will continue to be like. It's hard to be ahead on everything or behind on everything. Maybe if you had the only superintelligence in the world, you could do it. I think that would actually be bad for other reasons.
>> But is it a race, Sam? I mean, we keep framing it as a race.
>> Sort of.
>> Is it a collaborative journey, or is it a race, or...
>> Yeah, I mean, both. That's the hard part of this, right? There are always ways in which the global economy is interdependent and very collaborative, and then there are always ways in which it's a competition, and it's so tempting to say it's all one or the other. Reality is always messier than that. But I do think it's important to look at the net differences in power and the impact that could have. And I think it's, you know, really important that democracy leads with AI.
>> And when you say democracy leads with AI, you mean China should not lead with AI.
>> It was a little bit more nuanced a comment than that, but if you want to reduce it to that...
>> No, because in your speech yesterday, you mentioned authoritarian, you know, a couple of times. So I was just kind of wondering, how do we think about China? How do we unpack China as well?
>> What I meant, more, is that I don't think there should be any single superintelligence in the world. I don't think there should be any one person or any one country or any one company in charge of superintelligence, including the United States. I think that would also be bad. I think the world is at its best when power is widely distributed, when people have a lot of different ideas, and when there's enough of a balance of power that we, countries, individuals, companies, whatever, can sort of keep each other in check and all have input. Over time, even though we'll do some dumb things along the way, people kind of get to pick the best ideas and let those rise up. So what I meant by authoritarian here is that you don't literally want one AI in charge of the world, no matter who has it. I think that would be bad.
>> Yeah. And do you think, if you compare it to the big tech era of seven years ago, AI is going to fragment power more, or is it going to concentrate it more? Because it seems that the old big tech is keeping pace.
>> That is, in some sense, I think, one of the most important questions in front of us right now. You can totally imagine a world where AI massively concentrates power, where, say, a single company or country is able to hold on to AI and use it to amass a gigantic amount of power and wealth in the world. You can imagine another extreme where everybody on Earth has a superintelligence with no rules whatsoever, and, you know, some pretty terrible things and chaos happen. And you can imagine many worlds in the middle. I personally think something much more towards the democratized version of that is good. We of course will need some regulation. We of course will need some guardrails. But I think putting this technology in the hands of lots of people is possible, and that will be a decentralization of economic power. The clearest example of this you can see, in the last six and especially three months, is the power that a one- or two- or three-person company can have. This just didn't happen a few years ago. It was impossible. In this new world of things like Codex and general-purpose knowledge-work agents, we're going to have to get used very quickly to one-person startups that have enormous amounts of success, or two-person, three-person startups. And I think this is great.
>> Yeah. And how competitive is all of this from your seat? I mean, we saw a little awkward moment on stage yesterday, and there are several memes about it going around.
>> You've got to give the internet something to laugh at. [laughter] It is definitely competitive, but I will say that I think...
>> It's also very incestuous. I mean, a lot of people were building out stuff or working with you at one point, and everybody knows everybody, and Nvidia's investing in you, you're buying Nvidia chips, and so on.
>> It is. It is a weirdly small world, for sure. I think it's very competitive commercially, but almost all of the efforts that have a very advanced model, I think, feel the gravity of what's happening, and they're very committed to getting safety and alignment right and willing to cooperate there.
>> A little bit more about what happened yesterday, if you want to share. This is a good opportunity.
>> I don't really have that much more to add.
>> I won't push it. Okay. You know, there's just so much to cover, Sam, and I learned so much. I'm very grateful, because I learned so much prepping for this conversation; it was like three days of just being deep into it. So I thought I'd break it up into a game.
>> Great.
>> And I can just try to cover more and more stuff, rather than having a conversation about each item, and we'll see how we do. So it's not a rapid fire, because it's just too long to be a rapid fire, and I tried to make it shorter, but I couldn't.
Is there any one thing you admire about Google's catch-up in the AI race? Because 10 years ago they were out of the water.
>> Well, 10 years ago, Google was the only serious AI effort.
>> Five years ago, maybe.
>> Maybe three years ago, when we launched ChatGPT, they were then way out of it. But the first thing I admire about Google is that Demis and the Google team started working on AI before anyone else in the modern era, with a lot of conviction, and, you know, I think without their inspiration we certainly wouldn't be here. So that's one thing I admire. And then a second thing, more recently, is their relentless focus and execution; their ability to really scale the model after being pretty far behind is quite impressive.
>> Okay. Were you surprised when Apple partnered with Google for Siri?
>> Not particularly.
>> Okay. The one country that is by and large on the right path for regulation of AI.
>> I don't think any of us know the answer to that yet. A thing that I'm happy about is that countries are trying different approaches, and I think this is a good thing about having many sovereign powers in the world. We will get to observe very different approaches over the next few years and see what works and what doesn't, and pretty quickly, I believe, the world will move towards more of what works. But I'm grateful for the experimentation and for different countries trying different things.
>> Okay. So, a short game: I'll give you some criticisms of AI, and you give me a short defense of each of these criticisms. So the first...
>> I might not have one for some. That's fine.
>> And there are so many questions, so you can always duck or pass. Too much concentration of power.
>> Fair, unless we push super hard to democratize. And I think everyone in the world needs to hold countries and companies to this threshold: you don't get to claim concentration of power in the name of safety. We don't want that trade. It's got to be democratized.
>> Okay. The amount of natural resources that are going into the data centers, the amount of water, the amount of...
>> Water is totally fake. It used to be true; we used to do evaporative cooling in data centers. But now that we don't do that, you know, you see these things on the internet saying don't use ChatGPT, it's 17 gallons of water for each query, or whatever. This is completely untrue. Totally insane. No connection to reality. What is fair, though, is the energy consumption: not per query, but in total, because the world is now using so much AI. That is real, and we need to move towards nuclear or wind and solar very quickly.
>> You know, we had Bill Gates here last year, and I asked him this question, and he gave a very interesting statistic. At that time, apparently, it was calculated that 10 iPhones' worth of battery would go for every ChatGPT query, and now it has come down to one iPhone or one and a half iPhones' worth of battery life in energy for every ChatGPT query. Is that correct?
>> There's no way it's anything close to that much.
>> It's much less than that?
>> Way, way, way less.
>> Okay. All right. But his theory was that the AIs will learn from human evolution to be more efficient in how much energy they consume.
>> One of the things that is always unfair in this comparison is that people talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query. But it also takes a lot of energy to train a human. It takes like 20 years of life, and all of the food you eat during that time, before you get smart. And not only that, it took the very widespread evolution of the hundred billion people who have ever lived, who learned not to get eaten by predators and learned how to figure out science and whatever, to produce you. So the fair comparison is: if you ask ChatGPT a question, how much energy does it take, once its model is trained, to answer that question, versus a human? And probably AI has already caught up on an energy-efficiency basis, measured that way.
>> Wow. Okay. Fantastic. AI is making my kids dumber.
>> True for some kids. Look, when I hear kids talk about AI, there are definitely some kids who are like, "This is great. I cheated my way through all of high school. I never did any homework. Thank you." And I'm like, "What's your plan for the rest of your life?" And they're like, "Well, I assume I can still use ChatGPT to do my job." This is very bad. We absolutely still have to teach our kids to learn and to think and to be creative and to use these tools. That's not what most kids say, though. Most kids say, I can't believe what I can accomplish now. Look at this thing that I've just made. I've built these incredible new workflows. You know, sure, I may use ChatGPT the way you used Google when you were in high school, to help with your homework, but now that I have this new tool, I'm doing these amazing things. I think we will need to find new ways to teach and evaluate in school to make sure every kid is brought along. But the potential of this technology, the ability to learn more, to do more: I have no doubt about that. When I was in school, Google had just come out, and my middle school teacher was like, "This is the worst thing ever. There's no point to teaching anymore. Why do you have to memorize, you know, the date that someone in history was born, if you can just look it up on Google?" And my answer was, I think it's a complete waste of time to memorize what year someone was born; I will just go look that up on Google if I ever need to know it again, which usually you don't, that often. And then I watched, over the next few years, teachers come to peace with this and the education system evolve. And as always happens with new tools, the potential goes up and expectations go up, and we'll have to teach people to think harder and create more. And, you know, I'm pretty sure that a kid born today, when they're 18 and graduating high school, will be able to do things that no one today can. And I think that's great.
The other criticism is that it's just not democratic enough, in that the people who want to pause AI don't have a voice. I want to double-click on this question: where do you see the resistance coming from? Is it coming from the people who have experienced the technology at its highest levels and then say, "We're nervous, we want to pause"? Or does the resistance come from those who have never experienced it, are nervous to experience it, and feel that vulnerability?
>> I mean, parts of it come from everywhere. Although as more people use the technology, I think there's less of a "let's totally pause it." People like it. And there's more of a "what is this going to mean? Can we have more of a voice? Can this go a little more slowly?"
>> Okay. The one thing Silicon Valley should learn from Chinese technology.
>> Move fast. Okay. The one thing Silicon Valley can learn from Indian technology.
>> Move faster.
The one real and one imagined fear of Chinese dominance in AI.
I think the imagined fear is that, you know, the Chinese military is going to send a billion humanoid robots marching through the streets and overpower the US army or something like that. I think that's an example of fighting the last war and assuming the next one will be the same. The real fear is a new kind of war between nations that gets fought over the internet, where you influence people and hack into a bunch of critical infrastructure. That feels totally possible.
>> That's a very scary reality, where we get the surveillance state, all those kinds of things. You aren't worried?
>> I'm super worried about that. I think increasingly people are using the fear of AI going wrong to justify a surveillance state, and I don't think the people doing that have fully thought through the downsides of the surveillance state.
>> Yeah. Okay. Um, you urgently need to look something up. Your phone is discharged, so you borrow mine. The letter T on my touchpad doesn't work, so you can only key in Grok, Claude, Gemini, or DeepSeek.
[laughter]
>> I guess if I have to look something up, I pick Gemini.
>> Okay. All right.
>> For other things, I'd pick other models.
>> Okay. The most compelling reason you have moved from a not-for-profit to a capped-profit to pretty much any revenue, including ads?
>> The need to democratize AI and the need to stay at the forefront of research.
Both of those require huge amounts of capital.
>> And do you think you're a research-first company or a product-first company?
>> Research-first company. The coolest thing about AI, I think, is that the great majority of making a good product is doing good research. If we make a great model, the product is already pretty good. The early version of ChatGPT was almost no product at all; it was a text box that fed into a model and put the output on the screen. So research that continues to push what a model is capable of is what enables everything else.
>> Okay. The one criticism about you that bothers you when you hear it.
>> Yeah, I've heard basically every possible criticism, and at some point none of them bother you, because it's all gotten so insane, so detached from truth. I guess the one that makes me the most sad is: "He's just doing it for the power. He doesn't really care about trying to do something that's going to be helpful to the world."
What were you thinking when you decided not to take any equity in OpenAI? What was going through your mind at that time?
>> That was truly one of the dumbest things, and I say [laughter] this as someone who's made a lot of dumb decisions. The conspiracy theories it brought about, like the one I just mentioned, it was totally not worth it. We were a nonprofit, and there was a kind of quirk where, to be on the board of a nonprofit, you had to be sort of disinterested. I had been very fortunate to have a successful career, so I was like, "I don't really care either way." But it was a very dumb thing.
>> And if somebody today were to offer and figure out a way for you to get onto the cap table with equity, would you take it?
>> I mean, at this point I feel so tired of the whole conversation, and so trapped, that I'm not sure what to do.
>> Okay.
>> It's like a lose-lose thing for me.
>> Okay. Um, in spite of all the differences you have with Elon Musk, what's the one thing you admire about him? Is there one thing, putting aside all the differences?
>> No. I'm going to think of something, but give me a minute. He's extremely good at physical engineering, and also extremely good at getting people to perform incredibly well at their jobs.
>> Okay, superb. Um, ChatGPT is not known for its opinions; it's known for giving facts and factual answers. So I'm going to ask your opinion on the news. There has been a lot in the news this last week. We want your views: would you support a government-ordered ban on social media for kids under 16?
Uh, I don't know if a total ban is right. Maybe it is. I think at least heavy restrictions would be good. I have a young child, and I'm more interested in what I think are the very negative impacts of infinite scroll. I would like my kid not to be one of those iPad kids going like this all the time. So that seems very important. But yeah, social media for kids is probably something we have to be very careful with.
>> Okay, it seems the clouds of war are upon us again, with a lot of talk about Iran. If the government were to ask AI companies like yours to help figure out how to wage war, how would you respond? Is the Pentagon just another client for you, where they're the customer and you do what they ask, or do you draw a line somewhere?
>> I don't think AI systems should be used to make war-fighting decisions. I don't think they're at a level of sophistication and reliability where that is a good idea. That said, we certainly want to support the government, and there's a lot we can do already. Someday there will be, I think, really important applications of this to defense. But right now the models have clear limitations.
>> It was used in Maduro's capture, I believe. Is that true?
>> I just don't know. I'm sure it was used in some ways. There are things AI can do a great job of today; using AI to analyze a huge amount of intelligence reports is probably a great use of it, and maybe it was used in some way like that.
>> Yeah. Your views on Claude safety chief Mrinank Sharma's dramatic resignation? I don't know if you followed that, but it made a lot of news.
>> I heard something about it.
>> Okay.
>> They resigned, saying it's all useless or something.
>> Yeah. "The world's going to go to hell; I want to go write poetry." It was in that direction. And a lot of safety people are, I think, getting overwhelmed. That's the real piece I wanted you to reflect on.
>> The part of it I agree with is the inside view at the companies, looking at what's going to happen: the world is not prepared. We're going to have extremely capable models soon. It's going to be a faster takeoff than I originally thought, and that is stressful and anxiety-inducing.
>> Okay. Reflect a bit on the future of love and relationships. We keep hearing of people using AI agents to open up conversations on dating apps. How much authenticity are we losing in love and relationships?
>> My bet would be that in the future we value human relationships much more. I think we are wired to do that. There will be some people who fall in love with an AI, or talk about their AI boyfriend or girlfriend, or have AI send messages on dating apps for them. Some small percentage of the world will do that, and there will be a lot of breathless articles written about the end of society and all of that. But I think for almost everyone, in a world with more abundance, human connection, human attention, human warmth will be among the most valuable commodities, and I bet we will care much more about them in a world with AI.
>> Okay. You've invested in over 400 companies. I don't know if that's accurate, or
>> I mean, counting Y Combinator, more like 3,000.
>> Wow. Okay. Well, if you take all the Y Combinator investments, fair. But given the speed of AI growth and the disruption, where would you advise the people in this room to invest today? Is there one place you think they should?
The cool thing about a moment like this, when the ground is shaking, is that everything is sort of up for grabs. Things are changing so fast. As we talked about earlier, you can have these one-person companies doing things that used to take much bigger companies, but moving faster and much more efficiently. So there will be all of the "do the existing things better," and then there's a whole set of new things we just couldn't imagine before. It feels like the investment landscape is more open than it has ever been in my time as an adult. There were many times before when it felt like there was only one thing to invest in, only one frontier. Now it feels like you can do whatever you want.
>> Okay. How far are we from ASI? You said AGI was a few years away. How far are we from ASI?
>> No, I said ASI is a few years away.
>> So AGI, how far then?
>> I mean, AGI feels pretty close at this point.
>> Okay.
>> I think if you had asked most people six years ago, "What would you think if we had systems that could do new research on their own? That could make an entire complex computer program on their own? That could do pretty sophisticated knowledge work in all these different fields, so that one system could act as an AI doctor, lawyer, and computer scientist?", they would say, "Okay, that sounds pretty general and pretty intelligent." We get used to whatever we have. But just watching how much the technology we already have is accelerating us internally, I would say it's pretty close. And given what I now expect to be a faster takeoff, I think superintelligence is not that far off.
>> The one thing you'll never ask ChatGPT?
>> I think I would never ask it how to be happy. I would rather ask a wise person.
>> That's really interesting, [applause] because one of the biggest uses of AI is for companionship, for that category, to the extent of therapy.
>> Yeah, I think for things like therapy or life advice it can be pretty good. For life philosophy, I'm still not going to take it seriously.
>> Okay. Wonderful. So which is less likely to happen: that TSMC loses its monopoly over world chip manufacturing, or that Musk and you become friends again? Which one is less likely?
>> Or more likely? I think Musk and I becoming friends again is less likely; I feel like I have more control over that one. But both are very unlikely things.
>> What is it about TSMC, the Taiwan Semiconductor Manufacturing Company? What is it about it that makes it so... I mean, the whole world is kind of just...
>> They have this relentless focus on getting better at what they do. It's an incredibly complex process, and they are just a machine at how they optimize, get better, and learn at every level.
>> Okay. The most recent update you can share with us about your partnership with Jony Ive? You're talking about these new cool devices that will change the form factor of the cell phone, how we interact with machines. Anything you can share with us about that?
>> We have used computers in basically the same way for about 50 years. The work that Xerox PARC did to invent all the concepts of windows and pointing devices and everything we now take for granted is amazing. Putting it on a phone and adding multi-touch was also amazing. But fundamentally, it's been a similar kind of idea. Then AI came along, and AI is stuck in that same form factor. But AI is quite a different thing. You can now talk to a computer in natural language, and it can do incredibly complex things for you; it can understand a huge amount of context. And the form factor of computers does not quite work for this. I want to use a piece of technology that is observing my whole life, that has all the context, that is maximally useful to me, that is in my way the least, and that is not just on or off but has these subtle states in between. We may fail at this; this is very hard. But I think we have a chance to make a new kind of product, a new family of products, really designed around AI, that participate in your life and are not in the way of it.
>> So that's the theory, but what is the product?
>> You'll have to wait and see.
>> When can we see this? Because it's sounding very revolutionary.
>> We hope to be able to talk about it late this year. But hardware is hard.
>> Okay. Sure.
>> Okay. Um, the one thing that governments should regulate, and one thing they should avoid regulating.
>> This is not a complete answer, but generally speaking, I think it's probably a good idea for governments to focus on regulating the really potentially catastrophic issues and to be more lenient on the less important ones until we understand them better.
>> Okay. Is there any one introspection or learning from your professional disagreements with any of the co-founders you've had?
>> It's hard to reduce that to a single sound bite. One thing that Greg and I have disagreed on a lot over the years is how much the company should focus versus do a lot of things. I think I've been wrong about trying to do so many things, and I've really learned from him the importance of extremely narrow, deep focus.
>> Okay. Interesting. One mistake, the biggest mistake, you have seen corporates make in adopting AI?
>> I was in a meeting yesterday with a big company that was planning to spend 2026 strategizing, 2027 getting the company ready, and 2028 deploying. That may work for other kinds of technology; apparently, if you do a giant ERP migration, that's the kind of timeline it takes. Doing that for AI would be a catastrophic mistake. The nimbleness required, the speed, the commitment required is just totally different.
>> Okay. To get to the last piece here: the following heads of state want your opinion on the one thing they need to do for their people's future in this new AI context: Xi Jinping, Narendra Modi, Putin, Trump. What's the one thing you'd tell each of them to do?
>> I would say the same thing to everyone, which is: you've got to democratize this technology and put it in the hands of people. They're not all going to listen to that, obviously.
[laughter]
>> It's a fair answer, of course, but it's a safe answer. Is there something different a young country should do versus an old country, from a head of state's perspective?
>> I mean, I feel increasingly radicalized on this point. I don't think any other strategy is going to work.
>> Okay, fair. If I can ask you to describe your relationship with Satya Nadella, because he calls OpenAI a frenemy.
>> I would just say friends.
>> Okay, cool. Your most expensive hire ever. You don't have to tell me who, but how much did it cost?
>> We are no comparison to Meta here.
>> Okay. My last question now: which of these four statements, which you have made and have now upgraded or revised, do you regret? Statement one, that India building a foundation model with $10 million is hopeless. Statement two, that you will remain a not-for-profit. You clarified that.
>> Yeah, the point was that anyone building a foundation model for $10 million.
>> Okay, so that's statement one. Statement two, you'll remain a not-for-profit. Statement three, too much AI regulation can stifle innovation. And statement four, OpenAI won't accept advertising.
>> I don't think I ever said we won't accept advertising.
>> Sorry?
>> I don't think I ever said we won't accept advertising. I think maybe I said I had misgivings. I guess I'll pick the nonprofit one.
>> Okay. All right. Super. It's been great; I've tried to cover as much as I could.
>> This has been a lot.
>> You only gave me an hour, man. [laughter] I would have wanted a few more.
We'll take a few questions from the audience. Okay, can I get a show of hands? Who all are here for...
>> I would love another meme. So if anyone wants to ask me about Indian foundation models, let's run it back.
>> Okay. Yeah. Can we get Rajesh first, please? Rajesh is the founder of MakeMyTrip. He makes money when we are holidaying. [laughter]
>> All right. Thank you, Anand, and welcome again, Sam. An incredible one-hour chat, and you did a great job, by the way, keeping yourself calm while Anand did a great job of provoking you a hell of a lot. We are partners, by the way; I met Brad earlier today. I just have one question, and it was triggered by your comment that you are a research-first company. If I think harder about that, about the kick for researchers, the satisfaction they get every time they go deep into research and strike gold, in terms of innovation, in terms of something really crazy they discover: do you think, given the power of AI, that an equal amount of focus needs to be on responsible AI as well?
>> Yes, I do think that, and one of the things I'm most proud of with our research team is how much they feel that as we get closer and closer to superintelligence. They of course feel it more and more, but it's been a core part of our DNA from the very beginning, and I think the researchers who succeed the most with us have that process going on in their brains.
>> I think we don't have time. I think the last question... I'm getting some hand signals. Yeah. Okay. Last question.
>> Let's do a few more. I'll answer fast.
>> The race for talent, uh, where do you think it's heading?
>> As AI does more and more of the research and engineering, I suspect teams will do much more with smaller numbers of people. So I think the race for talent should ease somewhat, and you'll see small teams in research labs do a huge amount of work.
>> I have a question for... actually, I think he's asked you all the questions.
>> How much did OpenAI, ChatGPT, help you with the questions? [laughter]
>> I have to admit I use it a lot for research. But if you ask OpenAI to help you with an interview and give you the questions, it just feels very synthetic, so I don't end up using it. Yet.
>> That's good.
Sam, do we have time for one more, or no? Yes. Okay, cool. Can we actually see a show of hands? I'm not getting a sense. Can I do some affirmative action and get some women's hands, please? Yes, please. Right there. Thanks. Please.
>> Hi, Sam. This is Tuti. Before I ask you my question, I think I might have manifested this. I'm a podcaster, and I asked ChatGPT...
>> Okay, a short question, please.
>> Because this is what I manifested: you and me sitting in your OpenAI office three weeks ago, and here I am. But my question to you, as a creator: of course we use a lot of ChatGPT, but what concerns you more, that AI becomes too powerful, or that humans become very passive? Especially for creators. You know, with ChatGPT, as even Anand said, it sort of becomes very superficial when you rely only on it.
>> I don't think humans will become too passive, so I guess I'll take the other one. Watching what creators are doing, in particular where they can use these tools to get a quicker, tighter iteration loop from idea to feedback to idea to feedback, I think we'll get better stuff. I think we'll get more creative podcasts, better writing, better images. But I don't think we're going to become passive in that process. I used to worry about that, but it doesn't seem like that's what's happening.
>> Okay, I've actually got a list of people we had promised questions to. Mr. Abhishek Khaitan is here. Sorry, where is he? Oh, there he is. Can we get a mic to him, please? Hazel, is Hazel around? Yeah, please go ahead.
>> First of all, it's been a very interesting session. What I wanted to ask is: which profession, because of ChatGPT and AI, will be most under threat?
>> I think a lot of professions will almost go away. For a lot of the current ones, we will find new things to do, for sure, and people will adapt. But if you think about it, well, I'll pick my own: I was trained as a software engineer, and the way I learned to write software is now effectively completely irrelevant. That doesn't mean there's no software engineering job in the future. But writing C++ code by hand, that's over. And I think there are a lot of other professions where, in the old sense, they're over; there may be something totally new. There will be other professions that change almost not at all. But big categories of jobs AI is just going to completely obsolete, and what those people do will have to completely adapt.
>> Sam, let's flip the question. What's the least vulnerable job?
>> Well, there was a question about creators. One thing I think is interesting is that when AI started generating images, people said, "Graphic artists, that's over, that's done." And that might be true for the kind of graphic-artist job that was to make someone's birthday card invitation or something. But for fine art, the price of AI-generated art is zero, and the price of human-generated art has continued to go up since this happened. There are many things like that, where we care about the person who does it. Another example: I had to go to the hospital recently, and I really cared about the nurse who was taking care of me. If that had been a robot, I think I would have been pretty unhappy no matter how smart the robot was.
>> Okay. Hazel.
>> Hi, Sam. I'm from Chitkara University. Just taking off from the previous question on responsible AI: how do you see your role? Do you invest time in putting in checks and balances, and does the pace of those checks and balances keep up with the advancements we are seeing in AI? Just around that. Thank you.
>> Yeah, it's a huge focus of ours as a company, and mine personally. Sometimes we overdo it; sometimes we're too conservative with a new technology or a new level; sometimes, if we put in too many checks, people get mad about that. There is a tension between democratizing AI and enough safety. But I think our principle of starting conservative and then broadening access has served us well, and we plan to continue it.
>> Sam, we're completely out of time. Thank you so much for your time. Thank you for being here. It's really a privilege to have you with us.
>> Thank you.
>> Please take your seats.
Please remain seated. I now invite Raj Kamal Jha, chief editor of The Indian Express, to present a memento. We request everyone to please remain seated.
This illustration is by Indian Express cartoonist E. P. Unny, and he wanted you to know that it is not AI-generated.
[applause]
>> I now invite Abhishek Khaitan, managing director of Radico Khaitan, to present Sam with a bouquet.
>> Thank you. I request our other partners to also join us on stage for a group photograph: Miss Hazel, Prasantraati, Rajes Nambi, Anupama Sharma, and Koreel Lahiri. Please join us on stage for a group photograph. I request everyone else to please remain seated.
Please remain seated till Sam moves out.
We would appreciate everyone to please remain seated.
Please remain seated.
Thank you everyone for joining us today.
I would like to thank our partners.
Presenting partner Radico Khaitan, co-presented by 361, associate partners IMS Ghaziabad, Chitkara University, Hyderabad FRR Immigration, and broadcast partner NDTV.