Sam Altman: How OpenAI Wins, AI Buildout Logic, IPO in 2026?
By Alex Kantrowitz
Summary
Topics Covered
- Code Red Beats Competitors
- Redesign AI-First Beats Bolting On
- AI Memory Surpasses Human Limits
- Knowledge Tasks Already Expert-Level
- Compute Fuels Science Discoveries
Full Transcript
You know, that 1.4 trillion you mentioned, we'll spend it over a very long period of time. I wish we could do it faster. I think it would be great to just lay it out for everyone once and for all how those numbers are going to work. Exponential growth is usually very hard for people. OpenAI CEO Sam Altman joins us to talk about OpenAI's plan to win as the AI race tightens, how the infrastructure math makes sense, and when an OpenAI IPO might be coming. And Sam is with us here in studio today.
Sam, welcome to the show.
>> Thanks for having me.
>> So, OpenAI is 10 years old, which is crazy to me. ChatGPT is three, but the competition is intensifying. This place we're at, OpenAI headquarters, is in a code red after Gemini 3 came out, and everywhere you look there are companies that are trying to take a little bit of OpenAI's advantage. And for the first time I can remember, it doesn't seem like this company has a clear lead. So I'm curious to hear your perspective on how OpenAI will emerge from this moment.
>> First of all, on the code red point, we view those as relatively low-stakes, somewhat frequent things to do. I think it's good to be paranoid and act quickly when a potential competitive threat emerges. This has happened to us in the past; it happened earlier this year with DeepSeek.
>> There was a code red back then too.
>> Yeah. There's a saying about pandemics, which is something like: when a pandemic starts, every bit of action you take at the beginning is worth much more than action you take later, and most people don't do enough early on and then panic later. We certainly saw that during the COVID pandemic. I sort of think of that philosophy as how we respond to competitive threats. And I think it's good to be a little paranoid.
Gemini 3 has not, at least so far, had the impact we were worried it might, but it did, in the same way DeepSeek did, identify some weaknesses in our product offering and strategy, and we're addressing those very quickly. I don't think we'll be in this code red that much longer. Historically, these have been kind of six- or eight-week things for us. But I'm glad we're doing it. Just today we launched a new image model, which is a great thing and something consumers really wanted. Last week we launched 5.2, which is going over extremely well and growing very quickly. We'll have a few other things to launch, and then we'll also have some continuous improvements, like speeding up the service. But my guess is we'll be doing these once, maybe twice, a year for a long time, and that's part of really just making sure that we win in our space. A lot of other companies will do great too, and I'm happy for them. But ChatGPT is still by far the dominant chatbot in the market, and I expect that lead to increase, not decrease, over time.
The models will get good everywhere, but a lot of the reasons that people use a product, consumer or enterprise, have much more to do with things beyond just the model. And we've been expecting this for a while, so we try to build the whole cohesive set of things that it takes to make sure that we are the product that people most want to use. I think competition is good; it pushes us to be better. But I think we'll do great in chat. I think we'll do great in enterprise, and in future years there will be other new categories, and I expect we'll do great there too. I think people really want to use one AI platform. People use their phone in their personal life and they want to use the same kind of phone at work most of the time. We're seeing the same thing with AI. The strength of ChatGPT consumer is really helping us win the enterprise. Of course, enterprises need different offerings, but people think, OK, I know this company OpenAI, and I know how to use this ChatGPT interface. So the strategy is: make the best models, build the best product around them, and have enough infrastructure to serve it at scale.
>> Yeah, there is an incumbent advantage. ChatGPT, I think, was around 400 million weekly active users earlier this year. Now it's at 800 million, with reports saying it's approaching 900 million. But then, on the other side, you have distribution advantages at places like Google. So I'm curious to hear your perspective: do you think the models are going to commoditize? And if they do, what matters most? Is it distribution? Is it how well you build your applications? Is it something else that I'm not thinking of?
>> I don't think commoditization is quite the right framework to think about the models.
There will be areas where different models excel at different things. For the kind of normal use cases of chatting with a model, maybe there will be a lot of great options. For scientific discovery, you will want the thing that's right at the edge, perhaps one optimized for science. So models will have different strengths, and the most economic value, I think, will be created by models at the frontier, and we plan to be ahead there. We're very proud that 5.2 is the best reasoning model in the world and the one that scientists are making the most progress with, but we're also very proud that it's what enterprises are saying is the best at all the tasks a business needs to do its work. So there will be times that we're ahead in some areas and behind in others, but the overall most intelligent model I expect to have significant value even in a world where free models can do a lot of the stuff that people need. The products will really matter. Distribution and brand, as you said, will really matter. In ChatGPT, for example, personalization is extremely sticky. People love the fact that the model gets to know them over time, and you'll see us push on that much, much more. People have experiences with these models that they then really kind of associate with them. I remember someone telling me once that you kind of pick a toothpaste once in your life and buy it forever, or most people apparently do that. And people talk about it. They have one magical experience with ChatGPT. Healthcare is a famous example, where people put a blood test or their symptoms into ChatGPT, figure out they have something, go to a doctor, and get cured of something they couldn't figure out before. Those users are very sticky, to say nothing of the personalization on top of it.
Then there will be all the product stuff. We just launched our browser recently, and I think that's pointing at a new, pretty good potential moat for us. The devices are further off, but I'm very excited to do that. So I think there'll be all these pieces. On the enterprise side, what creates the moat or the competitive advantage, I expect to be a little bit different. But in the same way that personalization to a user is very important in consumer, there will be a similar concept of personalization to an enterprise, where a company will have a relationship with a company like ours, they will connect their data to it, you'll be able to use a bunch of agents from different companies running on that, and it'll kind of make sure that information is handled the right way. I expect that'll be pretty sticky too. We already have more than... People think of us largely as a consumer company, but...
>> We're definitely going to get into enterprise.
>> Yeah, you know, like...
>> Share the stat.
>> Well, actually, we have more than a million enterprise users, and we have just absolutely rapid adoption of the API. The API business grew faster for us this year than even ChatGPT.
>> Really?
>> So the enterprise stuff is also really happening, starting this year.
>> Can I just go back to this, maybe, if commoditization is not the right word, maybe parity for everyday users? Because you started off your answer saying, OK, maybe everyday use will feel the same, but at the frontier it's going to feel really different. When it comes to ChatGPT's ability to grow, if I just use Google as an example: if ChatGPT and Gemini are built on models that feel similar for everyday uses, how big of a threat is the fact that Google has all these surfaces through which it can push out Gemini, whereas ChatGPT is fighting for every new user?
>> I think Google is still a huge threat, an extremely powerful company. If Google had really decided to take us seriously in 2023, let's say, we would have been in a really bad place. I think they would have just been able to smash us. But their AI effort at the time was kind of going in not quite the right direction, product-wise. They had their own code red at one point, but they didn't take it that seriously.
>> Everyone's doing code reds out here.
>> And also, Google has probably the greatest business model in the whole tech industry.
And I think they will be slow to give that up. But bolting AI into web search, and I may be wrong, maybe I'm drinking the Kool-Aid here, I don't think that'll work as well as reimagining the whole thing. This is actually a broader trend I think is interesting: bolting AI onto the existing way of doing things, I don't think, is going to work as well as redesigning stuff for the sort of AI-first world. That was part of why we wanted to do the consumer devices in the first place, but it applies at many other levels. If you stick AI into a messaging app and it does a nice job summarizing your messages and drafting responses for you, that is definitely a little better. But I don't think that's the end state. That is not the idea of: you have this really smart AI that is acting as your agent, talking to everybody else's agents, figuring out when to bother you and when not to bother you, what decisions it can handle and when it needs to ask you. Similar things for search, similar things for productivity suites. I suspect it always takes longer than you think, but I suspect we will see new products in the major categories that are just totally built around AI rather than bolting AI in. And I think this is a weakness of Google's, even though they have this huge distribution advantage.
>> Yeah, I've spoken with so many people about this question. When ChatGPT came out initially, I think it was Benedict Evans who suggested you might not want to put AI in Excel; you might want to just reimagine how you use Excel. And to me, in my mind, that was like: you upload your numbers and then you talk to your numbers. Well, one of the things people have found as they've developed this stuff is that there needs to be some sort of backend there. So is it that you sort of build the backend and then you interact with it through AI, as if it's a new software program?
>> Yeah, that's kind of what's happening.
>> Why wouldn't you then be able to just bolt it on top?
>> Yeah, I mean, you can bolt it on top, but... I spend a lot of my day in various messaging apps, including email, text, Slack, whatever. I think that's just the wrong interface. So you can bolt AI on top of those, and again, it's a little bit better. But what I would rather do is just have the ability to say in the morning: here are the things I want to get done today. Here's what I'm worried about. Here's what I'm thinking about. Here's what I'd like to happen. I do not want to spend all day messaging people. I do not want you to summarize them. I do not want you to show me a bunch of drafts. Deal with everything you can. You know me. You know these people. You know what I want to get done. And then batch updates to me every couple of hours if you need something. But that's a very different flow than the way these apps work right now.
>> Yeah. I was going to ask you what ChatGPT is going to look like in the next year, and then the next two years. Is that kind of where it's going?
>> To be perfectly honest, I expected that by this point ChatGPT would have looked more different than it did at launch.
>> What did you anticipate?
>> I didn't know. I just thought that chat interface was not going to go as far as it turned out to go. I mean, it looks better now, but it is broadly similar to when it was put up as a research preview; it was not even meant to be a product. We knew that the text interface was very good, you know, everyone's used to texting their friends and they like it. The chat interface was very good, but I would have thought that for a product to be as big and as significantly used for real work as what we have now, the interface would have had to go much further than it has. I still think it should do that, but there is something about the generality of the current interface whose power I underestimated.
What I think should happen, of course, is that AI should be able to generate different kinds of interfaces for different kinds of tasks. So if you are talking about your numbers, it should be able to show them to you in different ways, and you should be able to interact with it in different ways. We have a little bit of this with features like canvas. It should be way more interactive. Right now it's kind of a back-and-forth conversation. It'd be nice if you could just be talking about an object and it could be continuously updating as you have more questions, more thoughts, and more information comes in. It'd be nice for it to be more proactive over time, where it maybe does understand what you want to get done that day and is continuously working for you in the background and sending you updates. And you see part of this in the way people are using Codex, which I think is one of the most exciting things that happened this year: Codex got really good. And that points to a lot of what I hope the shape of the future looks like.
But it is surprising to me. I was going to say embarrassing, but it's not; clearly it's been super successful. It is surprising to me how little ChatGPT has changed over the last three years.
>> Yep. The interface works.
>> Yeah.
>> But I guess the guts have changed, and you talked a little bit about how personalization is big. To me, and I think this has been one of your preferred features too, memory has been a real difference-maker. I've been having a conversation with ChatGPT about a forthcoming trip that has lots of planning elements, for weeks now, and I can just come in in a new window and be like, "All right, let's pick up on this trip." And it has the context. It knows the guide I'm going with. It knows what I'm doing, the fitness planning I've been doing for it, and it can really synthesize all of those things. How good can memory get?
>> I think we have no conception, because of the human limit. Even if you have the world's best personal assistant, they can't remember every word you've ever said in your life. They can't have read every email. They can't have read every document you've ever written. They can't be looking at all your work every day and remembering every little detail. They can't be a participant in your life to that degree. And no human has infinite, perfect memory. AI is definitely going to be able to do that. And we actually talk a lot about this: right now, memory is still very crude, very early. We're in, like, the GPT-2 era of memory. But what is it going to be like when it really does remember every detail of your entire life and personalizes across all of that, not just the facts but the little small preferences that you had, that you maybe didn't even think to indicate, but that the AI can pick up on? I think that's going to be super powerful. That's one of the features, maybe still not a 2026 thing, but it's one of the parts of this I'm most excited for.
>> Yeah. I was speaking with a neuroscientist on the show, and he mentioned that you can't find thoughts in the brain; the brain doesn't have a place to store thoughts. But computing has a place to store them, so you can keep all of them. And as these bots do keep our thoughts, of course there's a privacy concern. But the other interesting thing is that we'll really build relationships with them. I think it's been one of the more underrated things about this entire moment that people have felt that these bots are their companions, are looking out for them. And I'm curious to hear your perspective. When you think about the level of, I don't know if intimacy is the right word, but companionship people have with these bots, is there a dial that you can turn to be like, oh, let's make sure people become really close with these things, or, you know, we turn the dial a little bit further and there's an arm's length between them? And if there is that dial, how do you modulate it the right way?
>> There are definitely more people than I realized that want to have, let's call it, close companionship. I don't know what the right word is. Relationship doesn't feel quite right. Companionship doesn't feel quite right. I don't know what to call it, but they want to have whatever this deep connection with an AI is. There are more people that want that at the current level of model capability than I thought. And there's a whole bunch of reasons why I think we underestimated this, but at the beginning of this year it was considered a very strange thing to say you wanted that. Maybe a lot of people still don't say it, but it's revealed preference. You know, people like their AI chatbot to get to know them and be warm to them and be supportive, and there's value there; even people who say they don't care about that, in some cases, still have a preference for it. I think there's some version of this which can be super healthy, and I think adult users should get a lot of choice in where on the spectrum they want to be. There are definitely versions of it that seem to me unhealthy, although I'm sure a lot of people will choose to do that. And then there are some people who definitely want the driest, most efficient tool possible. So I suspect, like with lots of other technologies, we will run the experiment. We will find that there are unknown unknowns, good and bad, about it. And society will over time figure out how to think about where people should set that dial, and then people will have huge choice and set it in very different places.
>> So your thought is to basically allow people to determine this.
>> Yes, definitely. But I don't think we know how far it's supposed to go, how far we should allow it to go. We're going to give people quite a bit of personal freedom here. There are examples of things we've talked about that other services will offer but we won't. Like, we're not going to have our AI try to convince people that they should be in an exclusive romantic relationship with it, for example. Got to keep it open.
>> But I'm sure that will... No, I'm sure that will happen with other services, I guess. Yeah, because the stickier it is, the more money that service makes. All these possibilities are a little bit scary when you think about them a little bit deeply.
>> Totally. This is one that really does... you can see the ways that this goes really wrong.
>> Yeah. You mentioned enterprise. Let's talk about enterprise. You were at a lunch with some editors and CEOs of some news companies in New York last week and told them that enterprise is going to be a major priority for OpenAI next year. I'd love to hear a little bit more about why that's a priority and how you think you stack up against Anthropic. I know people will say this is a pivot for OpenAI, which has been consumer-focused. So just give us an overview of the enterprise plan.
>> Our strategy was always consumer first. There were a few reasons for that. One, the models were not robust and skilled enough for most enterprise uses, and now they're getting there. The second was that we had this clear opportunity to win in consumer, and those are rare and hard to come by, and I think if you win in consumer it makes it massively easier to win in enterprise, and we are seeing that now. But as I mentioned earlier, this was a year where our enterprise growth outpaced consumer growth. And given where the models are today and where they will get to next year, we think this is the time when we can build a really significant enterprise business quite rapidly. We already have one, but it can grow much more. Companies seem ready for it. The technology seems ready for it. Coding is the biggest example so far, but there are other verticals that are now growing very quickly. And we're starting to hear enterprises say, you know, I really just want an AI platform.
>> Which verticals?
>> Finance. Science is the one I'm personally most excited about of everything happening right now. Customer support is doing great. But yeah, we have this thing called GDPval, though.
>> I was going to ask you about that. Can I actually throw my question out about that? All right. Because I wrote to Aaron Levie, the CEO of Box, and I said, I'm going to meet with Sam, what should I ask him? He goes, throw a question out about GDPval. Right. So this is the measure of how AI performs on knowledge-work tasks. And I said, OK. I went back to the release of GPT-5.2, the model that you recently released, and looked at the GDPval chart. Now, this of course is an OpenAI evaluation. That being said, the GPT-5 thinking model, the model released in the summer, beat or tied knowledge workers on 38.8% of tasks. GPT-5.2 thinking beat or tied on 70.9% of knowledge-work tasks, and GPT-5.2 pro on 74.1% of knowledge-work tasks, passing the expert-level threshold; it looks like it handled something like 60% of tasks at a level on par with an expert in the knowledge work. What are the implications of the fact that these models can do that much knowledge work?
>> So, you know, you were asking about verticals, and I think that's a great question, but the thing that was going through my mind, and why I was stumbling a little bit, is that eval. I think it's something like 40 different verticals that a business has to do.
>> There's make a PowerPoint, do this legal analysis, you know, write up this little web app, all this stuff.
>> And the eval is: do experts prefer the output of the model relative to other experts, for a lot of the things that a business has to do. Now, these are small, well-scoped tasks. They don't get at the kind of complicated, open-ended, creative work of figuring out a new product. They don't get at a lot of collaborative team things. But a coworker that you can assign an hour's worth of tasks to and get something back you like better 74 or 70 percent of the time, if you want to pay less, is still pretty extraordinary. If you went back to the launch of ChatGPT three years ago and said we were going to have that in three years, most people would say absolutely not.
And so as we think about how enterprises are going to integrate this, it's no longer just that it can do code; it's all of these knowledge-work tasks you can kind of farm out to the AI. That's going to take a while to really figure out, how enterprises integrate with it, but it should be quite substantial.
>> I know you're not an economist, so I'm not going to ask you what the macro impact on jobs will be. But let me just read you one line that I heard about how this impacts jobs, from Blood in the Machine on Substack. This is from a technical copywriter. They said, "Chatbots came in and made it so my job was managing the bots instead of a team of reps." OK, that to me seems like it's going to happen often. But then this person continued and said, "Once the bots were sufficiently trained up to offer good enough support, then I was out." Is that going to become more common? Is that what bad companies are going to do? Because if you have a human who's going to be able to sort of orchestrate a bunch of different bots, then you might want to keep them. I don't know. How do
you think about this?
>> So I agree with you that it's clear to see how everyone's going to be managing a lot of AI doing different stuff. Eventually, like with any good manager, hopefully your team gets better and better, and you just take on more scope and more responsibility. I am not a jobs doomer. Short term, I have some worry; I think the transition is likely to be rough in some cases. But we are so deeply wired to care about other people and what other people do. We seem to be so focused on relative status, on always wanting more, on being of use and service, on expressing creative spirit, whatever has driven us this long. I don't think that's going away. Now, I do think the jobs of the future, and I don't even know if jobs is the right word, whatever we're all going to do all day in 2050, will probably look very different than they do today. But I don't have any of this "oh, life is going to be without meaning and the economy is going to totally break." We will find, I hope, much more meaning, and the economy I think will significantly change, but I think you just don't bet against evolutionary biology. You know, I think a lot about how we can automate all the functions at OpenAI, and then, even more than that, I think about what it means to have an AI CEO of OpenAI. It doesn't bother me. I'm thrilled for it. I won't fight it. I don't want to be the person hanging on, saying I can do this better the handmade way.
>> An AI CEO could just make a bunch of decisions to sort of direct all of our resources to giving AI more energy and power. It's like...
>> I mean, no, you would really put a guardrail on it?
>> Yeah. Obviously you don't want an AI CEO that is not governed by humans. But think about, and this is a crazy analogy, but I'll give it anyway, a version where every person in the world was effectively on the board of directors of an AI company and got to tell the AI CEO what to do, fire them if they weren't doing a good job at that, and got governance over the decisions, but the AI CEO got to try to execute the wishes of the board. I think to people of the future that might seem like quite a reasonable thing.
>> OK, so we're going to move to infrastructure in a minute, but before we leave this section on models and capabilities: when's GPT-6 coming?
>> I don't know when we'll call a model GPT-6, but I would expect new models that are significant gains over 5.2 in the first quarter of next year.
>> What does significant gains mean?
>> I don't have an eval score in mind for you yet, but...
>> More on the enterprise side of things, or...?
>> Definitely both. There will be a lot of improvements to the model for consumers. The main thing consumers want right now is not more IQ; enterprises still do want more IQ. So we'll improve the model in different ways for different uses, but our goal is a model that everybody likes much better.
>> So, infrastructure. You have 1.4 trillion, thereabouts, in commitments to build infrastructure. I've listened to a lot of what you've said about infrastructure. Here are some of the things you said: if people knew what we could do with compute, they would want way, way more. You said the gap between what we could offer today versus 10x compute and 100x compute is substantial. Can you help flesh that out a little bit? What are you going to do with so much more compute?
>> Well, I mentioned this earlier a little bit. The thing I'm personally most excited about is to use AI and lots of compute to discover new science. I am a believer that scientific discovery is the high-order bit of how the world gets better for everybody, and we can throw huge amounts of compute at scientific problems and discover new knowledge. The tiniest bit of that is starting to happen now. It's very early, and these are very small things, but my learning from the history of this field is that once the squiggles start and the curve lifts off the x-axis a little bit, we know how to make it better and better. That takes huge amounts of compute to do. So that's one area: throwing lots of AI at discovering new science, curing disease, lots of other things.
A kind of recent cool example here: we built the Sora Android app using Codex, and they did it in less than a month. One of the nice things about working at OpenAI is you don't get any limits on Codex. They used a huge amount of tokens, but they were able to do what would normally have taken a lot of people much longer, and Codex mostly did it for us. You can imagine that going much further, where entire companies build their products using lots of compute.
People have talked a lot about how video models point toward real-time generated user interfaces. That will take a lot of compute. Enterprises that want to transform their business will use a lot of compute. Doctors that want to offer good personalized health care, constantly measuring every sign they can get from each individual patient: you can imagine that using a lot of compute.
It's hard to frame how much compute we're already using to generate AI output in the world. These are horribly rough numbers, and I think it's undisciplined to talk this way, but I always find these mental thought experiments a little bit useful, so forgive me for the sloppiness. Let's say that an AI company today might be generating something on the order of 10 trillion tokens a day out of frontier models. Maybe more, but it's not a quadrillion tokens for anybody, I don't think. Let's say there are 8 billion people in the world, and let's say, and these numbers are, I think, totally wrong, that the average number of tokens output by a person per day is 20,000. To be fair, we then have to compare the output tokens of a model provider today, not all the tokens consumed. But you can start to look at this and say: hm, we're going to have the models at one company outputting more tokens per day than all of humanity put together, and then 10 times that, and then 100 times that. In some sense it's a really silly comparison, but in some sense it gives a magnitude for how much of the intellectual crunching on the planet is human brains versus AI brains. And the relative growth rates there are interesting.
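Sam's back-of-the-envelope token comparison can be sketched in a few lines. This is a minimal sketch using only the rough, explicitly hand-wavy numbers from the conversation (10 trillion tokens a day for one company, 8 billion people at 20,000 tokens each); the 3x-per-year growth rate is borrowed from the compute tripling he mentions later, so treat the parity estimate as illustrative, not a forecast.

```python
import math

# Sam's admittedly "horribly rough" inputs.
company_tokens_per_day = 10e12      # ~10 trillion tokens/day from frontier models
people = 8e9                        # world population
tokens_per_person_per_day = 20e3    # guessed human "output" per day

# All of humanity: 8e9 * 2e4 = 1.6e14, i.e. 160 trillion tokens/day.
humanity_tokens_per_day = people * tokens_per_person_per_day

# One company today is a small fraction of that...
ratio = company_tokens_per_day / humanity_tokens_per_day  # 0.0625

# ...but at 3x growth per year, parity arrives quickly.
years_to_parity = math.log(humanity_tokens_per_day / company_tokens_per_day, 3)

print(f"humanity: {humanity_tokens_per_day:.2e} tokens/day")
print(f"one company today: {ratio:.1%} of humanity")
print(f"years of 3x/yr growth to parity: {years_to_parity:.1f}")
```

On these assumptions, a single lab is already around 6% of humanity's daily token output, and roughly two and a half years of tripling reach parity, which is the "more tokens than all of humanity, then 10x, then 100x" trajectory he describes.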
>> And so I'm wondering, do you know that there is this demand to use this compute? For instance, would we have surefire scientific breakthroughs if OpenAI were to put double the compute towards science? Or with medicine, would we have that clear ability to assist doctors? How much of this is supposition of what's to happen, versus clear understanding based off of what you see today?
>> Everything based off what we see today says that it will happen. That does not mean some crazy thing can't happen in the future. Someone could discover some completely new architecture, there could be a 10,000x efficiency gain, and then we would probably have really overbuilt for a while. But everything we see right now, how quickly the models are getting better at each new level, how much more people want to use them each time we can bring the cost down, all of that indicates to me that there will be increasing demand, and people using these for wonderful things and for silly things. It just so seems like this is the shape of the future.
And it's not just how many tokens we can do per day; it's how fast we can do them. As these coding models have gotten better, they can think for a really long time, but you don't want to wait a really long time. So there will be other dimensions. It will not just be the number of tokens we can do, but the demand for intelligence across a small number of axes and what we can do with those. If you have a really difficult healthcare problem, do you want to use 5.2, or do you want to use 5.2 Pro even if it takes dramatically more tokens? I'll go with the better model. I think you will.
>> Let's just try to go one level deeper.
Going to the scientific discovery, can you give an example of a scientist, maybe one you know today, who says: I have problem X, and if I put compute Y towards it, I will solve it, but I'm not able to today?
>> There was a thing this morning on Twitter where a bunch of mathematicians were all replying to each other's tweets, saying: I was really skeptical that LLMs were ever going to be good; 5.2 is the one that crossed the boundary for me; with some help, it did this small proof, it discovered this small thing, and this is actually changing my workflow. And people were piling on, saying, yeah, me too. Some people were saying 5.1 was already there, not many. That's a very recent example; this model's only been out for five days or something, and people are like, all right, the mathematics research community seems to say, okay, something important just happened.
>> I've seen Greg Brockman highlighting all these different mathematical and scientific uses in his feed, and something has clicked, I think, with 5.2 among these communities. So it'll be interesting to see what happens as things progress.
>> One of the hard parts about compute at this scale is you have to do it so far in advance. So that 1.4 trillion you mentioned, we'll spend it over a very long period of time. I wish we could do it faster; I think there would be demand if we could do it faster. But it just takes an enormously long time to build these projects, and the energy to run the data centers, and the chips and the systems and the networking and everything else. So that will play out over a while. But from a year ago to now, we probably about tripled our compute. We'll triple our compute again next year, and hopefully again after that. Revenue grows even a little bit faster than that, but it does roughly track our compute fleet. We have never yet found a situation where we can't monetize all the compute we have really well. If we had double the compute, I think we'd be at double the revenue right now.
>> Okay, let's talk about numbers, since you brought it up. Revenue is growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are that OpenAI is supposed to lose something like 120 billion between now and 2028-29, when you're going to become profitable. So talk a little bit about how that changes. Where does the turn happen?
>> As revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that's the plan: spend a lot of money training, but make more and more. If we weren't continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we're making is to invest very aggressively in training these big models.
>> The whole world is wondering how your revenue will line up with the spend. The question's been asked: the trajectory is to hit 20 billion in revenue this year, and the spend commitment is 1.4 trillion. So I think it would be great...
>> Just, over a very long period.
>> Yeah, and that's why I wanted to bring it up to you. I think it would be great to just lay it out for everyone once and for all how those numbers are going to work.
>> It's very hard to really do that. I certainly can't do it, and very few people I've ever met can do it. You can have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to build a good, quick mental framework for. For whatever reason, there were a lot of things evolution needed us to be able to do well with math in our heads; modeling exponential growth doesn't seem to be one of them. So the thing we believe is that we can stay on a very steep growth curve of revenue for quite a while, and everything we see right now continues to indicate that, and that we cannot do it if we don't have the compute. Again, we're so compute constrained, and it hits the revenue line so hard, that if we get to a point where we have a lot of compute sitting around that we can't monetize on a profitable per-unit-of-compute basis, it would be very reasonable to say: okay, how's this all going to work? But we've penciled this out a bunch of ways. We will of course also get more efficient on a flops-per-dollar basis as all of the work we've been doing to make compute cheaper comes to pass.
But we see this consumer growth, we see this enterprise growth, and there's a whole bunch of new kinds of businesses that we haven't even launched yet but will. Compute is really the lifeblood that enables all of this. There are checkpoints along the way, and if we're a little bit wrong about our timing or math, we have some flexibility. But we have always been in a compute deficit. It has always constrained what we're able to do. I unfortunately think that will always be the case, but I wish it were less the case, and I'd like to get it to be less of the case over time, because I think there are so many great products and services we can deliver, and it'll be a great business.
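The difficulty Sam describes, holding exponential growth in your head, is easy to demonstrate with a toy loop. The figures below are normalized illustrative units, not real OpenAI numbers; the only input taken from the conversation is the roughly 3x-per-year compute growth.

```python
# A fleet that triples every year versus the linear intuition of
# "adding a couple of fleets per year". Hypothetical, normalized units.
exponential = 1.0
linear = 1.0
for year in range(1, 6):
    exponential *= 3   # compounding: 3, 9, 27, 81, 243
    linear += 2        # additive:    3, 5, 7, 9, 11
    print(f"year {year}: exponential {exponential:.0f}x vs linear {linear:.0f}x")
```

After five years the compounding path sits at 243x today's fleet while the additive mental model predicts 11x, a roughly 20-fold miss; that gap is the difference between "revenue roughly tracks the compute fleet" and most people's quick mental math.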
>> Okay. So effectively, training costs go down...
>> As a percentage. They go up overall. But yeah.
>> And then your expectation is that, through things like this enterprise push, through things like people being willing to pay for ChatGPT and through the API, OpenAI will be able to grow revenue enough to pay for it.
>> Yeah, that is the plan.
>> Now, the market's been kind of losing its mind over this recently. I think the thing that has spooked the market is that debt has entered the equation. And the idea around debt is that you take debt out when there's something predictable: companies take the debt out, they build, and they have predictable revenue. But this is a new category. It is unpredictable. How do you think about the fact that debt has entered the picture here?
>> First of all, I think the market more lost its mind earlier this year, when we would meet with some company and that company's stock would go up 15% or 20% the next day. That was crazy. That felt really unhealthy. I'm actually happy that there's a little bit more skepticism and rationality in the market now, because it felt to me like we were heading towards a very unstable bubble, and now I think people have some degree of discipline. So I think people went crazy earlier and are now being more rational. On the debt front, we know that if we, the industry, build infrastructure, someone's going to get value out of it. It's still totally early, I agree with you, but I don't think anyone's questioning that there's going to be value from AI infrastructure. So I think it is reasonable for debt to enter this market. I think there will also be other kinds of financial instruments; I suspect we'll see some unreasonable ones as people innovate on how to finance this sort of stuff. But lending companies money to build data centers,
that seems fine to me.
>> I think the fear is that things don't continue at pace. Here's one scenario, and you'll probably disagree with this: the model progress saturates, the infrastructure becomes worth less than the anticipated value, and then, yes, those data centers will be worth something to someone, but it could be that they get liquidated and someone buys them at a discount.
>> Yeah. And I do suspect, by the way, that there will be some booms and busts along the way. These things are never a perfectly smooth line. First of all, it seems very clear to me, and this is a thing I would happily bet the company on, that the models are going to get much, much better. We have a pretty good window into this; we're very confident about that. Even if they did not, there's a lot of inertia in the world. It takes a while to figure out how to adapt to things. The overhang of the economic value that I believe 5.2 represents, relative to what the world has figured out how to get out of it so far, is so huge that even if you froze the model at 5.2, how much more value can you create, and thus how much revenue can you drive? I bet a huge amount.
In fact, you didn't ask this, but if I can go on a rant for a second: we used to talk a lot about this 2x2 matrix of short timelines versus long timelines and slow takeoff versus fast takeoff, where we felt the probability was shifting at different times, and how you could understand a lot of the decisions and strategy the world should optimize for based off of where you were going to land on that 2x2 matrix. There's a Z-axis in my picture of this that's emerged, which is small overhang versus big overhang. My retro on this, and I guess I didn't think about it that hard, is that I must have assumed the overhang was not going to be that massive: that if the models had a lot of value in them, the world was pretty quickly going to figure out how to deploy it. But it looks to me now like the overhang is going to be massive in most of the world. You'll have these areas, some set of coders, say, that'll get massively more productive by adopting these tools. But on the whole, you have this crazy smart model, and, to be perfectly honest, most people are still asking it questions similar to the ones they asked in the GPT-4 era. Scientists are different, coders are different, maybe knowledge work is going to get different, but there is a huge overhang, and that has a bunch of very strange consequences for the world. We have not wrapped our heads around all the ways that's going to play out yet, but it is very much not what I would have expected a few years ago.
>> I have a question for you about this capability overhang.
Basically, the models can do a lot more than they've been doing. I'm trying to figure out how the models can be that much better than what they're being used for, when a lot of businesses, when they try to implement them, say they're not getting a return on their investment. Or at least that's what they tell MIT.
>> I'm not sure quite how to think about that, because we hear all these businesses saying: if you 10x'd the price of GPT-5.2, we would still pay for it; you're hugely underpricing this; we're getting all this value out of it. So that doesn't seem right to me. Certainly if you talk about what coders say, they're like, "I'd pay 100 times the price," or whatever.
>> Could it just be bureaucracy that's messing things up?
>> Let's say you believe the GDPval numbers, and maybe you don't, for good reason, maybe they're wrong, but let's say it were true that for these well-specified, not-super-long-duration knowledge work tasks, seven out of ten times you would be as happy or happier with the 5.2 output.
You should then be using that a lot. And yet it takes people so long to change their workflow. They are so used to asking the junior analyst to make a deck, or whatever, that that's just stickier than I thought it was.
>> You know, I still kind of run my workflow in very much the same way, although I know that I could be using AI much more than I am.
>> Yep.
>> All right, we've got 10 minutes left.
>> Wow, that was quick.
>> I've got four questions. Let's see if we can lightning-round through them. So, the device that you're working on. We'll be back with OpenAI CEO Sam Altman right after this. What I've heard: phone-sized, no screen. Why couldn't it be an app, if it's the phone without a screen?
>> First, we're going to do a small family of devices; it will not be a single device. Over time, and this is speculation now, so I'll try not to be totally wrong, I think there will be a shift in the way people use computers, where they go from a sort of dumb, reactive thing to a very smart, proactive thing that understands your whole life, your context, everything going on around you, and is very aware of the people around you, physically or close to you via a computer you're working with. I don't think current devices are well suited to that kind of world. And I am a big believer that we work at the limit of our devices. You have that computer, and it has a bunch of design choices. It can be open or closed, but there's no "okay, pay attention to this interview, but stay closed and whisper in my ear if I forget to ask Sam a question," or whatever.
>> Maybe that would be helpful.
>> And there's a screen, and that limits you to the same way we've had graphical user interfaces working for many decades. And there's a keyboard that was built to slow down how fast you could get information into it. These have just been unquestioned assumptions for a long time, but they worked. Then this totally new thing came along, and it opens up a possibility space. I don't think the current form factor of devices is the optimal fit; it'd be very odd if it were, for this incredible new affordance we have.
>> Oh man, we could talk for an hour about this, but
let's move on to the next one. Cloud. You've talked about building a cloud. Here's an email we got from a listener: "At my company, we're moving off Azure and directly integrating with OpenAI to power our AI experiences in the product. The focus is to insert a stream of trillions of tokens powering AI experiences through the stack." Is that the plan, to build a big cloud business in that way?
>> First of all, trillions of tokens: a lot of tokens. You asked about the need for compute and our enterprise strategy. Enterprises have been clear with us about how many tokens they'd like to buy from us, and we are going to, again, fail to meet demand in 2026. But the strategy is this: most companies seem to want to come to a company like us and say, I'd like my company, with AI. I need an API customized for my company. I need ChatGPT Enterprise customized for my company. I need a platform that can run all these agents, that I can trust my data on. I need the ability to get trillions of tokens into my product. I need the ability to make all my internal processes more efficient. We don't currently have a great all-in-one offering for them, and we'd like to make that.
>> Is your ambition to put it up there with the AWSes and Azures of the world?
>> I think it's a different kind of thing than those. I don't really have an ambition to go offer all the services you'd need to host a website, or whatever. My guess is that people will continue to have their, call it, web cloud, and then there will be this other thing, where a company says, I need an AI platform for everything I want to do internally, for the service I want to offer, whatever. It does kind of live on the physical hardware in some sense, but I think it'll be a fairly different product offering.
>> Let's talk about discovery quickly. You've said something that's been really interesting to me: that you think the models, or maybe it's people working with the models, will make small discoveries next year and big ones within five. Is that the models? Is it people working alongside them? And what makes you confident that that's going to happen?
>> Yeah, people using the models. The models that can figure out their own questions to ask, that does feel further off. But if the world is benefiting from new knowledge, we should be very thrilled. The whole course of human progress has been that we build these better tools, then people use them to do more things, and out of that process they build more tools. It's this scaffolding that we climb, layer by layer, generation by generation, discovery by discovery, and the fact that a human is asking the question in no way diminishes the value of the tool. So I think it's great; I'm all happy. At the beginning of this year, I thought the small discoveries were going to start in 2026. They started in late 2025. Again, these are very small, and I really don't want to overstate them, but anything feels qualitatively very different to me than nothing. And certainly when we launched three years ago, that model was not going to make any new contribution to the total of human knowledge. As for what it looks like from here to five years from now, this journey to big discoveries, I suspect it's just the normal hill climb of AI. It gets a little bit better every quarter, and then all of a sudden we're like: whoa, humans augmented by these models are doing things that humans five years ago absolutely couldn't do. Whether we mostly attribute that to smarter humans or smarter models, as long as we get the scientific discoveries, I'm very happy either way.
>> IPO next year?
>> I don't know.
>> Do you want to be a public company? You seem like you can operate private for a long time. Would you go before you needed to, in terms of funding?
>> There's a whole bunch of things at play here. I do think it's cool that public markets get to participate in value creation, and in some sense we will be very late to go public if you look at any previous company. It's wonderful to be a private company. We need lots of capital, and we're going to cross all of the sort of shareholder limits and stuff at some point. So am I excited to be a public-company CEO? Zero percent. Am I excited for OpenAI to be a public company? In some ways I am, and in some ways I think it'll be really annoying.
>> I listened to your Theo Von interview very closely. Great interview.
>> He was really cool.
>> Theo really knows what he's talking about. He did his homework. You told him, and this was right before GPT-5 came out, that GPT-5 is smarter than us in almost every way. I thought that was the definition of AGI. Isn't that AGI? And if not, has the term become somewhat meaningless?
>> These models are clearly extremely smart on a sort of raw-horsepower basis. There's all this stuff from the last couple of days about GPT-5.2 having an IQ of 147 or 144 or 151 or whatever it is, depending on whose test; it's some high number. You have a lot of experts in their fields saying it can do these amazing things, it's contributing, it's making them more effective. You have the GDPval things we talked about. One thing you don't have is the ability for the model to not be able to do something today, realize it can't, go off and figure out how to learn to get good at that thing, learn to understand it, and, when you come back the next day, get it right. That kind of continuous learning, toddlers can do it, does seem to me like an important part of what we need to build. Now, can you have something that most people would consider an AGI without that? I would say clearly. I mean, there are a lot of people who would say we're at AGI with our current models. I think almost everyone would agree that if we were at the current level of intelligence and had that other thing, it would clearly be very AGI-like. But maybe most of the world will say: okay, fine, even without that, it's doing most knowledge tasks that matter, it's smarter than most of us in most ways, we're at AGI; it's discovering small pieces of new science, we're at AGI. What I think this means is that the term, though it's been very hard for all of us to stop using it, is very underdefined.
One thing I would love, since we got it wrong with AGI, we never defined it: the new term everyone's focused on is when we get to superintelligence. So my proposal is that we agree that AGI kind of went whooshing by. It didn't change the world that much, or it will in the long term, but okay, fine, we've built AGIs at some point. We're in this fuzzy period where some people think we have and some people think we haven't, and more people will think we have, and then we'll say: okay, what's next? A candidate definition for superintelligence is when a system can do a better job being president of the United States, CEO of a major company, or running a very large scientific lab than any person can, even with the assistance of AI.
>> Okay.
>> I think there was an interesting thing about what happened with chess. I remember this very vividly: that Deep Blue thing, and then there was a period of time where a human and the AI together were better than the AI by itself, and then the person was just making it worse, and the smartest thing was the unaided AI, without a human failing to understand its great intelligence. I think something like that is an interesting framework for superintelligence. I think it's a long way off, but I would love to have a cleaner definition this time around.
>> Well, Sam, look, I have been in your products, using them daily, for three years. They've definitely gotten a lot better. I can't even imagine where they go from here.
>> We'll try to keep getting them better, fast.
>> Okay. This is our second time speaking, and I appreciate how open you've been both times. So thank you for your time.
>> Thank you.
>> Everybody, thank you for listening and watching. If you're here for the first time, please hit follow or subscribe. We have lots of great interviews on the feed and more on the way. This past year, we've had Google DeepMind CEO Demis Hassabis on twice, including once with Google founder Sergey Brin. We've also had Dario Amodei, the CEO of Anthropic. And we have plenty of big interviews coming up in 2026. Thanks again, and we'll see you next time on Big Technology Podcast.