State of the AI industry — the OpenAI Podcast Ep. 12
By OpenAI
Summary
## Key takeaways

- **Agents Mature in 2026**: Agents, especially multi-agent systems, will mature to the point of having real visible impact, running full enterprise tasks like ERP reconciliation, accruals, and contract tracking daily, and consumer tasks like trip planning that integrate food preferences, reservations, airlines, and calendars. [00:48], [01:23]
- **Compute Directly Tracks Revenue**: OpenAI's compute scaled from 200 megawatts at $2B ARR in 2023, to 600 megawatts at $6B in 2024, to 2 gigawatts at over $20B last year, showing a strong correlation where more compute drives more revenue. [12:29], [12:41]
- **No AI Bubble in API Calls**: Bubbles should be measured by API calls or internet traffic, not stock prices; during the dot-com era, internet traffic showed no bubble despite stock gyrations, and AI API calls will similarly reflect real demand. [19:16], [20:13]
- **AI Automates Finance Drudgery**: AI pulls contracts overnight into a database, identifies non-standard terms, suggests revenue recognition, and provides business insights like coaching salespeople or detecting model shifts, replacing mundane entry-level work. [21:49], [23:04]
- **230M Weekly Health Queries**: 230 million people ask ChatGPT health questions weekly, 66% of US physicians use it daily, augmenting doctors by suggesting rare diseases outside their pattern recognition, such as malaria in a Scottish ER. [08:39], [09:36]
- **Productivity Skyrockets with AI**: Slash runs $150M ARR with one controller using an AI ERP that replaced NetSuite; one SDR supervises what 10 did; top-quartile firms see 27-33% productivity gains, shifting hires to growth roles. [24:39], [26:24]
Topics Covered
- Agents Mature in 2026
- AI Augments Doctors
- Compute Tracks Revenue
- No Bubble in API Calls
- AI Replaces Drudgery
Full Transcript
Hello, I'm Andrew Mayne, and this is the OpenAI Podcast. Today, our guests are Sarah Friar, CFO of OpenAI, and legendary investor Vinod Khosla of Khosla Ventures. In
this discussion, we're going to talk about the state of the AI ecosystem, whether or not we're in a bubble, and how startups and investors can succeed as AI progresses.
Unlike something like Netflix, where they're running so many hours in a day, I think of it much more like infrastructure, like electricity. Demand is limited not by anything other than availability of compute today. I think the conversation we need to have is what will people do? 2025
was about agents and vibe coding. Now it's 2026. What's the story of 2026? I
think... We matured in vibe coding in 2025. I don't think we've matured in agents. So agents, especially multi-agent systems, will mature to the point of having real visible impact. Whether you're in enterprise and you have multi-agent systems doing full tasks, like running an ERP system for you. You know, doing all the reconciliation every day, accruals every day, tracking contracts every day. I think that on the enterprise side. But today, on the consumer side, you know, it's still a hassle to plan a trip. That's a multi-agentic thing that looks across a lot of different things, from your food preferences to the restaurant reservation to airline schedules to your personal calendar. Those will start to mature, I think, a year from now. So I'm pretty excited about that. I think models in robotics, and real world models that go well beyond robotics, like general intuitions, will all start to happen in the next year. So
I think that those are areas to look for. There's usual functions like memory in LLMs, continual learning in LLMs, reduction of the impact of hallucinations. Those are all areas I could go on. There's
half a dozen areas in which AI doesn't do as well today that will start to be addressed. Yeah. And I think at its baseline, what Vinod is saying is '26 is the beginning of closing this capability gap. So what we know is we've handed people massive intelligence, right? We've handed them the keys to the Ferrari, but they are only learning how to take it out on the road for the first time. We need to give consumers more and more easy access, easy ways to go from ChatGPT as just a chatbot, call and response. Most
people use it today just to ask questions. But how do we take it towards being a true task worker that books that trip for them, or helps them get a second opinion on what they just heard from their doctor, or enables them to create a menu for their diabetic child, right? How do we help them really move from simple questions into actual outcomes that make my life better? And then on the enterprise side, it's that same continuum. How do we close the capability gap? One of the things we know from our AI in the enterprise report that our chief economist put out at the end of last year is, on the frontier versus even the median corporation, the number of messages is about 6x, right? Which tells you that's 6x the usage from a company that's already on the frontier. And we know that frontier isn't even pushed to its max. So for us, it's this focus of how do we help consumers move along that continuum to true agentic task working? And then for enterprises, how do we create a much more sophisticated, vertically specialized outcome for enterprises, one that allows them to go from maybe a very simple ChatGPT implementation the whole way to something that's transforming the most important part of their business? For a healthcare provider, it might be their drug discovery process. For
a hospital, it might be the time to admit a patient and get that patient back into the community. For a really large retailer, it might be just larger basket sizes, higher conversion rates, and much happier customers. So it's the basics of closing that capability gap. So I might add one other perspective. We've talked
about the number of areas in which the technology will advance and capability will advance.
I would venture to guess that today, of the people using AI, whether it's personal or enterprise, some single-digit percentage are even using 30% of the capability of the AI. So this percentage of people who are using 30% or 50%, let alone 80%, of the AI's capabilities will keep increasing.
I think that's a 10-year journey before people learn to use AI. I've seen this.
Some people, kind of pundits, confuse adoption curves for capability curves. And that's come up where you've seen people- So that's the point I'm making. And it's a force multiplier, because today we have over 800 million people using ChatGPT weekly. But that number should be in the billions. And then what percentage of it are they using? It's like we've just turned electricity on in the home. We've wired up the home and they've turned on the lights, but they have no idea that they could now heat their home. They could cook. They could curl their hair.
There's so many things you now can do. An analogy I've used is that email didn't really get much better between 1990 and the year 2000. Neither did mobile, but usage went way up. And the problem wasn't like, well, we need better email. We
need more, better mobile. It's like people need to learn all the things they could use it for. Right. Yeah. And in a more sophisticated way, like mobile is always one that's interesting to me because when mobile took off, people just took their desktop websites and turned them into mobile and they were really hard to scroll, but I guess you at least had them in your pocket. But then you realized you had
a GPS and So now you could have Uber and now you could do things with location or you had a camera at your fingertips. Okay. So now, yeah, I can take photographs of all my friends, but I can also snap, you know, a check and deposit it into my bank account. Although we should fix the whole paper check thing, but that's an aside. It still seems like I can just take a
photo of this and now I get money in my bank account. Yeah. But you know, all of that existed the minute mobile was available to us; it just took human ingenuity coming to work on it. So I
think you're right. I don't even know if we need more intelligence than we have today to vastly increase outcomes. But of course, the models are gonna keep getting more intelligent as well. You mentioned health, and that's one of the really kind of high stakes things we think about when it comes to just probably the most important thing.
And it's kind of fascinating to think about: just a few years ago, we got ChatGPT and were using it for very simple applications, and now we're trusting it with HIPAA-compliant data. Do you look at that as sort of a marker of how fast or how well things have been accelerating? Are there other ones like that you think about to say, OK, now we know we're at some new level? Health
is clearly one of those areas. I've long believed AI will revolutionize health by making expertise a commodity in all areas of health. The problem with health is regulatory. So first, there's constraints on what AI can do. And AI can't legally write a prescription, even if it's better than human beings at writing a prescription. That is not only the FDA; it actually goes beyond the FDA to the American Medical Association, which institutionally controls that function. So there will be incumbent resistance in a lot of
areas. I think we can talk about it if you like. But diagnosing is still a constraint, because the FDA controls that. There's no AI approved as a medical device yet. Fortunately, this administration is doing a very good job of moving quickly and taking the appropriate level of risk, so I'm pretty pleased to see what's happening there. On the health front, we see in our data 230 million people every week ask ChatGPT a health question. 66% of US physicians say they use ChatGPT in their daily work. I'll tell you, at a personal level, my brother is an HDU doctor in the UK. So his job is, right, you hit the ER, they don't know how to triage you, so they send you to him. You kind of don't want to show up to him. He's expected to be very good. He's very
good at what he does. But it means you're not in good shape. But he's
expected to have almost encyclopedic knowledge of every disease that ever existed. So
I always give the example: he works in Aberdeen in Scotland. If you showed up with malaria, he would not think of that. That is not in his pattern recognition.
And yet that could have happened. I don't know. You went on vacation somewhere. You
got bitten by a mosquito. Boom, you're showing up in an ER room in Aberdeen.
What ChatGPT can do, or what the model can do, is really act as a great augmentation to the doctor, which is why I think 66% of them are using it. And that number is only growing, right? You know, it's probably already much higher. And so I think it's just a great example of where, with something like health, we're getting the benefit of our doctors being able to have always the latest research in front of them, always the latest known interactions, say, between someone's drug regime and what they're living through and experiencing as individuals. But it also puts some independence back
into consumers' hands. So now I get the opportunity to, ahead of time, do some research on what my symptoms might be saying, so I can have a much more educated conversation with my doctor. It allows me to maybe get a second opinion, or know that I want to go ask for a second opinion. Also, we go very fast to these extreme places, but it's even just things like, hey, I've got 20 minutes a day to exercise, I know I'm suffering from type 1 diabetes, what could I do in 20 minutes? Or, my daughter has an interesting issue with the food she eats. It used to be a super frustrating thing even to go to a restaurant, because we'd have to ask the server so many questions. And now we can photograph a menu, ChatGPT suggests what are likely the best dishes for her to order, and then we can have a terser but more productive conversation about what's going to work. It has just changed how we think about eating; it takes it away from being all about the food to why we're going out for dinner together. And so I think there are all these examples in something like health. It's already happening, and it's going to keep getting better and better. And then, to Vinod's point, I think the regulatory environment is going to have to catch up. No matter what kind of system you're under, the rate at which the cost of medical care increases is exceeding GDP growth in every country. And it seems like we needed AI, we needed it now,
and it can be helpful. And as you pointed out, it's the first time the cost of medical intelligence has dropped year over year. But that comes with a lot of demand for compute. And we have a lot more questions that we want to have answered. And certainly people can see the need for more compute. But the scale and scope at which OpenAI is investing in compute is incredibly huge. We're talking numbers that are just really hard to fathom. How does OpenAI determine that need?
You know, what are the metrics you're looking at to think, like, yes, we need to spend this much? So first of all, we are trying to make sure we stay investing in compute to match the pace of our revenue. And we've seen a really strong correlation between in-period compute and in-period revenue. I'll give you an example. If you just go back to '23, '24, and '25, our compute was 200 megawatts, then 600 megawatts, and it ended last year at two gigawatts. Against that, and it's really easy because the numbers match up: we exited '23 at $2 billion in ARR, so 200 megawatts, $2 billion. We exited '24 at $6 billion, so $6 billion, 600 megawatts. And we exited last year a little over $20 billion; $20 billion, 2 gigawatts. Actually, it's been accelerating. So even if you just look at the slope of the line, it says more compute, more revenue. Now, there is definitely a timing mismatch, because I have to make decisions today about making sure we have compute not even in '26 or '27, but in '28, '29, and '30. Because if I
don't put in orders today and don't give the signal to create data centers, it won't be there, right? Today we feel... absolutely constrained on compute. There are many more products that we could launch, many more models that we would train, many more multimodality things we would explore if we had more compute today. So for example, even in
the last year, I think the overall hardware investment globally has gone up by something like $220 billion. That's just how much actual spending has gone up. If you look at chips, chip forecasts have gone up similarly, about $334 billion. So it's not just OpenAI. The signal from the whole environment is AI is real.
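Sarah's megawatts-to-ARR figures imply a roughly constant ratio, which is easy to sanity-check (the megawatt and ARR numbers are as quoted in the episode; the per-billion ratio is my arithmetic, not hers):

```python
# Compute footprint vs. revenue, as quoted: (megawatts, $B ARR) per exit year
data = {2023: (200, 2), 2024: (600, 6), 2025: (2000, 20)}

for year, (megawatts, arr_billions) in data.items():
    # Megawatts of compute per billion dollars of ARR
    ratio = megawatts / arr_billions
    print(f"{year}: {ratio:.0f} MW per $1B ARR")  # 100 MW per $1B ARR each year
```

Every year works out to the same 100 MW per $1B of ARR, which is why she says the numbers match up so cleanly.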
We are in a paradigm shift. We need to invest to give people the intelligence they need to do all the things we just talked about, for example. So back
inside of OpenAI, we do spend a lot of time going very deep on what is our demand signal in consumer, in enterprise, in developers. We think
about what's the mosaic. First, at the base, on the infrastructure layer: how do we create max optionality? So we want to be multi-cloud, multi-chip, and that gives us optionality at the infrastructure layer. One tick up, at the product layer, we also want to become more multi-dimensional. So we used to just be one product, ChatGPT. Today we are ChatGPT for consumer with all of the blades inside it, healthcare and so on, ChatGPT for work. But we also have Sora as a new platform. We have some of our transformational research projects.
One tick up, we also then have a business model ecosystem that's becoming much more multidimensional. It began with a single subscription, because we'd launched ChatGPT and we needed a way to pay for the compute. We now have multiple price points. First ChatGPT subscriber, by the way. I love you for that. Multiple subscriptions. We went to the enterprise and had SaaS-based pricing. We have credit-based pricing now for places where high value is being found. People want to pay more to get more. We're beginning to think about things like commerce and ads. And then of course, longer term, I like models like, for example, would we do licensing models to really align? Let's say in drug discovery, if we licensed our technology and you have a breakthrough, that drug takes off,
and we get a licensed portion of all its sales. It's great alignment for us with our customer. So if you think about those three tiers, I actually think of it like a Rubik's Cube. So we went from a single block, one CSP, Microsoft, one chip, one product, one business model, to now a whole three-dimensional cube. And one of the things I love about a Rubik's Cube, I'm probably not getting the number exactly right, but I think it has 43 quintillion different states it can be in. It always blew my mind when I was in university. So now just think about that cube spinning. So we pick a low latency chip going alongside something like coding that's 5x the pace that people expect. We can charge a high-end subscription for that. So it's almost like you line up the cube and you get three colors on one side. We could spin the cube again and say low latency chip, faster image gen, more free users come in, but that creates more inventory for ultimately perhaps an ads platform. So
you can start to see how the goal in the last 12 months has been creating more and more strategic options that allow me to keep paying for the compute we need to really achieve our mission, AGI for the benefit of humanity. So,
you know, the way to simplify that is demand is limited not by anything other than availability of compute today, whether it's Sora or more broadly. And then there's price elasticity, where demand is infinite for compute. So I think that's the way to think about it. We haven't even started to exercise the price elasticity lever. It's just that we can't fulfill demand, and it's limited by compute. So all the people talking about bubbles and things, I think, are on the wrong track. They have no sense of how large this change is and how much more demand there is for API calls. As
one of OpenAI's earliest investors, You made a bet early on. You saw where this was headed, but you saw the dot-com bubble. You watched what happened there, but you've also seen other things, the mobile revolution. You've seen this happen with other areas. And
you mentioned the term broad. And is that sort of where your conviction comes from, just how many different areas it touches? Yeah. When we invested, we had one simple metric. There were no projections to look at, no product plans to look at, no ChatGPT to look at. It was very simply the idea: if we develop anywhere near close to human intelligence, let alone supersede human intelligence, its impact is going to be huge. So it was this hand-wavy approach, like, the consequences of success are really going to be consequential.
So why not try that?
There's also this funny notion of bubble. People equate bubble to stock prices, which has nothing to do with anything other than fear and greed among investors. So
I always look at it this way: bubbles should be measured by the number of API calls, or, in the dot-com bubble which people refer to, by the amount of internet traffic, not by what happened to stock prices because somebody got overexcited or underexcited, and in one day they can go from loving NVIDIA to hating NVIDIA because it's overvalued. Those gyrations
aren't reality. The reality is the underlying number of API calls. If you
look at internet traffic during the dot-com bubble, prices may have gone up violently and gone down violently. There's no bubble detectable in internet traffic. I would almost guarantee you, you won't see the bubble in number of API calls. And if that's
your fundamental metric of what's the real use of AI, usefulness of AI, demand for AI, you're not going to see a bubble in API calls. What
Wall Street tends to do with it, I don't really care. I think it's mostly irrelevant. Great for press articles, because press has to fill their column inches, but it's not reality. So prices of things aren't reality, or stock prices, private company valuations. The reality is what's the actual demand for AI, which is the number of API calls. Right. And I think
if I hark back to that moment where you were looking at 1999, the value people were getting from the Internet at the time, it was so young, so nascent, that you couldn't really see how it was changing their lives.
I do think that with AI, it's happened so fast, that change. It's very real.
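Vinod's test, that real demand shows up in usage curves rather than in prices, can be illustrated with a toy comparison (the series below are synthetic, purely to show the shape of the argument, not actual market or API data):

```python
# Synthetic series: a gyrating stock price vs. steadily growing usage (API calls)
prices = [100, 180, 90, 150, 60, 110]   # boom-and-bust gyrations
api_calls = [10, 15, 23, 34, 51, 76]    # millions of calls, compounding ~50%

def max_drawdown(series):
    """Largest peak-to-trough fall, as a fraction of the peak."""
    peak, worst = series[0], 0.0
    for x in series:
        peak = max(peak, x)
        worst = max(worst, (peak - x) / peak)
    return worst

print(f"price drawdown: {max_drawdown(prices):.0%}")     # 67%: looks like a bubble
print(f"usage drawdown: {max_drawdown(api_calls):.0%}")  # 0%: demand never declines
```

On this toy data the price series crashes 67% from its peak while the usage series never falls at all; that monotone usage curve is what "no bubble detectable in internet traffic" looks like numerically.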
Like as a CFO, forget about being the CFO of OpenAI, but as a CFO, what I see happening in my organization is truly taking tasks where previously I would have had to keep adding more and more people doing fairly mundane things. Let's take something like revenue management. In a team that does revenue management, one of the things they do every day is they have to download all the contracts that we signed the day before or through the week, and they have to read all of those contracts to make sure there's no terms sitting in them that are unexpected, that are effectively non-standard terms. Because a non-standard term means that there could be a revenue recognition change that has to happen. And that's a very big deal for a finance team. That's the number one thing usually your auditors come in to audit you on. The pace at which we are growing, right, the number of contracts every day is going up in multiples. So my only choice in a pre-AI world would have been hire more people. And imagine what those people's jobs are like. You come to work every day and you read a contract, and then you read the next one, and the next one. It is so mundane and such drudgery. And it's not why people, you know, went to school and learned about the accounting field or thought about being a finance professional. But that's kind of the job we hand them as an entry level job. Today, using our own tools here at OpenAI, overnight, all of those contracts are pulled out of a system. They are put into a tabular database, the Databricks database in our case.
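A minimal sketch of this kind of overnight contract-review pipeline might look like the following. All names, fields, and the rule-based flagger here are hypothetical placeholders for illustration, not OpenAI's actual system, which per Sarah runs on Databricks with model-driven review of the contract text:

```python
# Toy overnight pipeline: load yesterday's contracts and flag non-standard terms
from dataclasses import dataclass

# Hypothetical set of standard payment terms; anything else gets flagged
STANDARD_PAYMENT_TERMS = {"net-30", "net-60"}

@dataclass
class Contract:
    contract_id: str
    payment_terms: str
    refund_clause: bool  # a refund clause can change revenue recognition

def flag_nonstandard(contracts):
    """Return (contract_id, reason) pairs a reviewer (or model) should look at."""
    flags = []
    for c in contracts:
        if c.payment_terms not in STANDARD_PAYMENT_TERMS:
            flags.append((c.contract_id, f"unusual payment terms: {c.payment_terms}"))
        if c.refund_clause:
            flags.append((c.contract_id, "refund clause may change revenue recognition"))
    return flags

batch = [
    Contract("C-001", "net-30", False),
    Contract("C-002", "net-180", False),  # non-standard payment terms
    Contract("C-003", "net-30", True),    # rev-rec risk
]
for cid, reason in flag_nonstandard(batch):
    print(cid, "->", reason)
```

In the real system the hard-coded rule table would be replaced by a model reading the full contract text, suggesting the revenue-recognition treatment, and surfacing the business insight Sarah describes next, such as coaching the salesperson or promoting a recurring non-standard term to standard.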
The agent, or the intelligence, is able to go through. It shows me exactly what is nonstandard and why. It suggests what the rev rec therefore is. But it also suggests the insight, which is: should this term even be here? Did the salesperson just give away something they shouldn't have? In which case, I go and I coach them. Is it actually telling me something about my business that's starting to shift? In which case, this non-standard term should actually become a standard term, and what I'm experiencing is a shift in my business model, which might actually be a good thing. Or perhaps I want to find a different way to get the customer what they're looking for, and the salesperson what they're looking for, but maintain my revenue recognition, my current business model. Right. So I know my more junior entry level people are over on the right of that discussion, and they're kind of refinding the job they loved. Mm hmm. That to me is why it's not a bubble, because the value is real and tangible. It also means I probably can have a smaller team. I can have a much more high performing team, much higher morale on my team, better retention rates, right? All of these I can put into numbers to say my business is now healthier. And I think that's the piece when the press is trying to lead with the bubble conversation or whatever. They just miss that we are investing with demand, if anything behind demand at the moment. A bubble to me suggests you're investing ahead of demand and there's going to be a gap. And
you look at productivity numbers, they're going up in the companies that are adopting AI, especially the newer, tech-oriented set of companies. The numbers are just absolutely amazing. So one of my favorites is a little company called Slash, about $150 million ARR. They have one person in accounting, only a controller, because they adopted an AI-oriented ERP system. They replaced NetSuite with it, and it's just amazing what they can do. And the CEO was apologizing to me that he might have to hire a second person. And they're moving really rapidly. I just saw a story: somebody replaced 10 SDRs with one SDR and AI, essentially, where the one SDR remaining supervises.
I've been hearing stories too, where instead of hiring somebody in an area that doesn't create growth, companies can now, when they hire, hire people that are creating a lot more growth for the company. And that's why you're seeing a lot of these tech companies just build so fast. You know that old phrase, the future is here now, but it's not evenly distributed? Yes. I see all these single points of huge productivity gains and efficiency gains, or agility gains, the ability to move faster. But a very small percentage of the people in the US, or worldwide, have adopted these or even know they exist. And so this issue back to demand: I think some of these examples will spread to everybody over time, and you'll see an exponential growth of adoption of these technologies. That's why I don't think demand is the question. Yeah, Vinod is absolutely spot on. I think McKinsey did a study that showed, for companies in the top quartile, their productivity, measured by any kind of financial metric you would pull, is up, you know, in the 27 to 33% range. Like
that's a really meaningful jump. I think where you were going is, it doesn't just mean fewer employees overall. There's definitely a place to kind of shift people over into more growth-oriented jobs. I was hiking this weekend with someone who runs a very large consulting company that you all would know of. And he was talking about how, on what he thinks of more as his back-end systems, the leader there is now talking about her organization as people plus agents. And she has a one to five ratio, one person to five agents. But on the front end, they're actually back at rehiring to grow, because clients need more help now to think about deploying AI. So it's actually shifting back, I would say, to the jobs people want to do. Not the jobs that maybe were just open to them because more and more of the world had become this kind of, you know, so much information that people were parsing it. Now we're finally back to machine and agent intelligence parsing it.
I want to touch back on the consumer side. You mentioned ads. And certainly the argument made is that with ads you can increase the benefits to people: you can provide more services, more AI, you can help pay for the compute, and people get more out of those tiers with that. But that brings up the question of trust. When people first thought about AI, even just asking it questions, people worried about what does ChatGPT do with my information? Once you have ads in play, people worry about that, because it's often just a big question of how does that affect the rest of the product and the org? Yeah, so I think you started in the right place, which is today, 95% of our users use our platform for free
on the consumer side. And that's absolutely where our mission is, right? AGI for the benefit of humanity, not the benefit of only those who can pay, right? So access is very important. From an ads perspective, I think number one, we have to just make sure everyone understands you're always going to get the best answer the model can provide you, not the paid-for answer. And I think other platforms have fallen back into that, where you're not sure, is this a sponsored link or is this truly the best outcome? We have a North Star, which is that the model will always give you the best answer. I think the second thing to understand is that there can be a lot of utility in ads. So we want to make sure people know when it is an ad that they're working with. But for example, if I do a search for a weekend getaway to, pick your favorite city, I don't know, San Diego, an ad for Airbnb might actually be very helpful. And you might even want to have a discussion with the ad, or with the advertiser in that case, in a ChatGPT setting that's very rich, but you're clear that it's in an advertising setting. And I think this is where there has to be more innovation on what feels endemic to the platform, not just kind of the old world of sticking banner ads on things. And I think the third and final thing for me is, again, there always has to be a tier where advertising doesn't exist. So we give
the user some choice and some control, but we're very mindful of your data. When
we released health, we were very clear your data is off to one side. It's
not being used to train on and so on. And I think we just need to keep giving users that kind of trust. Trust is everything for OpenAI, and we're going to stand by those principles, even when it comes to things like ads. On the consumer side, is it gonna be a world where you're gonna have a lot of subscriptions to different AI services? I think you'll have every model. Most
people will have more than one subscription. Media is a good example: most people have more than one media subscription. So that's a good proxy for consumer behavior. Different people will pick different choices, including free choices, which is ad-supported media too. So you can get even the same services paid or for free. I think you'd see a wide range of diversity. How do
you think about, though, the expense of going to a different platform? So I
like ChatGPT memory. I'm finding it more and more helpful, because as I ask about one thing, it remembers something we talked about maybe weeks ago, months ago. Pulse, which is today not widely distributed, is the way I wake up in the morning now. It's amazing. It's so amazing. And when you start connecting it to things like your calendar, so it's not just saying, you know, "you're very interested in AI data centers," which clearly it must think I'm the most boring person on earth because this is what I see a lot of. But it also says, hey, on your calendar, you're going to be sitting down with Vinod today; remember a couple of these things.
It's so helpful. But if I am multi-homing, I'm losing the benefit, which is not the same as if I subscribe to the Wall Street Journal, The Economist, and The New York Times. They're not really losing out if I go read in other places, and in the same way, I'm not losing out. Yeah. So I do think memory is an important question: whether there'll be one purveyor or more than one purveyor of the models. On each model, there'll be multiple services that may offer different trade-offs. Yeah. So even, whether you're talking health or media, even on the OpenAI models, there are multiple people providing services. So that's what I was thinking of with multi-homing, but obviously I don't think OpenAI will be 100% of the market. I hope so.
I was gonna say, I hope so too. I'm okay with that. It's an interesting business model. I think it's hard for people to wrap their heads around, because Netflix is a great company, but there's only so many hours on the planet that people can watch Netflix, right? And mobile's great, right? I only need so many minutes of mobile per week or whatever. With AI and intelligence, you can have more intelligence: I can buy more and get better answers. And
I think that's a thing I'm still trying to wrap my head around: where that goes. The idea that you start at, you know, one level of free, use it for free, then you go to a smaller tier. And
then as it becomes more useful, you start increasing that. Where does it go? So
I think unlike something like Netflix, where they're running up against so many hours in the day, I think of it much more like infrastructure, like electricity. How much electricity do you use in the day? I don't know. I walked into a room today and there was a fan blowing. It was really nice; it cooled it down. There are lights on around us right now. There are so many. I charged my phone overnight and it worked for me all day. So I think the state we live in today is much more "I call on ChatGPT, I invoke it," as opposed to intelligence just being baked in. I think this will be the big change over the next couple of years. You'll kind of look back and it'll feel a little toy-like that we used to do this thing. And instead, it just is everywhere around us. And so... it's not really quite answering the question you're asking, but it's that I don't get so caught up that there's only so many hours for people to do things, because I feel like almost everything I do in life requires intelligence; I'm walking around, hopefully with some intelligence up here. And if I can get that augmented, I think it's going to surprise us. As we were talking before we got
started, you said that on your phone you suddenly discovered you had a flashlight and a camera. You say that and it's so obvious. And yet with ChatGPT, every time I discover what feels like almost a slightly cute use case, I'm so blown away by it. Like yesterday morning: I do love The Economist, and I wanted to read the editorial, but I didn't really have a ton of time because I was running upstairs to get ready. So I took a photograph of the editorial, because they're very good, they put it on one page, and I asked ChatGPT to read it to me, and it did it. And I was like, oh my God, this is awesome. So I just think there are all these moments where we're just getting started. And multimodal, I think, is probably the biggest, because phones taught us to talk with our thumbs. And I think in this new world we're moving into, there's going to be new hardware that really helps us understand that we can talk, we can listen, we can see, we can write, and do all of these
things in a very human way that we're just scratching the surface of. So let
me give you a different frame on that. I agree with all of that. If
you look at what we talked about earlier with the internet and the bubble associated with it: what the internet did is give you access to a lot more stuff, whether it was media, YouTube videos, or TikTok, or you name it, information of any sort. But it expanded to the point where no human can actually use the internet fully. I think of AI as, given you're limited to 8,000-some hours a year, some of which are meant for sleeping, something that will make your time much more efficient. So the internet exploded the information available to you, to the point where you couldn't use it. And I think what AI will do is filter it, to make your every hour the most effective hour, if you know how to use it. So intelligence will reduce the world to what is most relevant to you personally. And I may have a different set of priorities than Sarah. So I think of intelligence as summarizing the world to the most relevant things for me and the most relevant things to her, which are different.
So I think that's where there's almost unlimited capacity for intelligence to be used to reduce information, where the internet exploded information. Yeah,
yeah. We've talked a lot about the consumer side, and it feels like OpenAI is very much winning the consumer side. The question comes up about enterprise: how is OpenAI going to compete and win in that area? So I think we're already winning in this area. What I see is, you know, 90% of corporations are saying they either are using OpenAI or intend to use it over the next 12 months, right? I think the second thing is Microsoft, and Microsoft's using our technology. So I actually think this is where the consumer is a really potent part of the enterprise flywheel. As I said earlier, back in the day when you first started bringing your iPhone to work and corporates didn't want you to do that, you just discovered you can't say no to the tidal wave that is consumer preference. So with something I'm already using, that I've already got in my pocket, when I get to work my expectation is that work is at least as good, if not better. And so that's what's helped drive our actual enterprise business: the fastest company ever to get to one million businesses on a platform. And we did that in about a year and a half. But where to from here? Because
clearly we're just scratching the surface. So some of it is certainly meeting customers in terms of their vertical so that we talk to them in their language. And we
learn this art of enterprise selling, which is: let me not tell you all about my products, but let me understand your problem. What is your board forcing on you, Mr. and Mrs. CEO? What is the thing your customers most want that you can't deliver? Okay, let's start putting intelligence against that. We can then drop that into some light vertical specialization, up to quite heavy vertical specialization. Things like RL-ing models so they're very pertinent to a use case. Let's say in an energy company, it might be really understanding that particular oil well, or all the seismic data they have, to say: what's the recovery we're going to get out of this gas field? That is deep specialization. And then I think it goes the whole way to some of these big transformational research projects that we've begun, where we're almost taking over someone's whole business and helping them rethink it in a smarter, faster, better way that ultimately drives their key business metrics. So it's a journey. I think most corporates have started with wall-to-wall ChatGPT. That's an easy starting point. They've done some coding, and in many cases a lot of coding. Like
when I talk to corporates, their CEOs are now starting to say things like, "60 percent of all my production code was built by, you know, an agent." And I'm like, you didn't even know what production code meant 12 months ago. But now you're saying that, and that's good, because it means you're tracking it. But on agents, it's just starting. When you go out and just survey U.S. corporates, we only see about 14% using something agentic today. 14%, given what I just explained is happening in my finance organization. So I think we are just
getting going, but I couldn't be more excited about the opportunity. It's huge. Okay.
But if I'm a startup and I look at everything OpenAI is doing, I might be asking, is there room for me? What do I get to do? Look, models
will keep getting better and do more and more. But I do believe there's lots of room to build on top. You know, no one company can do everything on the planet. There are billions of people working whose jobs AI can help with. I don't think OpenAI will specialize for everyone. So I think the careful thing to do is be clear where the models will go, OpenAI's or others', and what they will be able to do, and then how you use that best to specialize into a more interesting world: some sort of specialization where you add something additional to the base models. And frankly, intelligence isn't the only thing needed to provide a solution. There's lots of other stuff that goes into a solution beyond intelligence. So I think there's lots of opportunity to build on top of these models. And the more powerful they get, the number of opportunities to add to them dramatically increases. How do you think about, well, so I think a lot about use cases where there's already a lot of data being aggregated, perhaps by that startup, by that company. Today, I think 95% of the world's
information actually sits behind corporate firewalls, university firewalls, and so on. So even though we talk about the vast training that's occurred, again, we're just getting going. But I think companies that have already built businesses that have aggregated that data have access to it.
And then, on top of that, have managed complex workflows. So I often give the example of our procurement system. A procurement system per se is not that complicated. But what it does very well is understand things like delegation of authority. So it knows what the board has approved in terms of approval limits. So it knows that when this software contract comes in, it's over X amount, so only I can approve it. Or, if it's beneath that, it knows a VP can approve it. It doesn't know that Andrew's a VP, but it knows to touch the HR system and check what his level is. And so the whole procurement flow can happen in a way where I have compliance and governance, and it hopefully makes the whole company run faster. Those are the places I get interested for startups. So: where have you got access to unique data with a complex workflow? It feels like there's more of a moat around that, and we want to work alongside you; the general-purpose model is not going to do all of that itself. Yeah, no, I completely buy that. I think there's lots of opportunity. I've seen quite a few startups around just permissioning around data.
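The delegation-of-authority flow Sarah describes can be sketched as a small routing check: look up the approver's level in the HR system, then compare the contract amount against the board-approved limit for that level. This is a hypothetical illustration, not OpenAI's actual system; the role names, dollar limits, and the in-memory `HR_DIRECTORY` stand-in for an HR lookup are all invented for the example.

```python
# Hypothetical sketch of a delegation-of-authority check for procurement.
# Limits, roles, and the HR lookup are invented for illustration.

APPROVAL_LIMITS = {   # board-approved spend ceilings per role
    "VP": 100_000,
    "CFO": 1_000_000,
}

HR_DIRECTORY = {      # stand-in for a call out to the HR system
    "Andrew": "VP",
    "Sarah": "CFO",
}

def can_approve(person: str, amount: float) -> bool:
    """Route an approval: check the person's level in HR, then the board limit."""
    level = HR_DIRECTORY.get(person)        # "touch the HR system"
    if level is None:
        return False                        # unknown person: no authority
    return amount <= APPROVAL_LIMITS.get(level, 0)

# A $250k software contract is over the VP limit, so only the CFO can approve it.
print(can_approve("Andrew", 250_000))  # False
print(can_approve("Sarah", 250_000))   # True
```

The point of the example is the workflow Sarah highlights: the system doesn't hard-code who approves what, it composes the HR lookup with the board's limits, which is where the compliance and governance come from.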
Like who can access what information. For example, I've seen a whole bunch of startups around customizing the models to each company, for their history and their priorities. And the agent side, the whole identity side of agents, I think we're just starting to understand: both the risk that can happen when
you have agents talking to agents talking to agents, but also how you're going to permission that, and then how to start thinking about things like agentic commerce. The complexity that's coming is also quite big. So as to suggesting there's no more opportunity as a startup: I think it's probably never been more interesting or fun to be a startup. Yeah. I think there are more opportunities than there have ever been. What are you looking for now? What gets you excited when you talk to a company? Well, the
hardest thing is great people, always. But I
think the other thing that has been in short supply is agency, where people have the agency to make things happen. That, again, comes down to people, but there's so much opportunity, I think. I think traditional things like knowing a space or having experience in a space are much less relevant now.
It's more agency. We've not talked about the whole new world of robotics and real world models and all that. That's a whole space by itself that we probably don't have time for. Whoa. Do we? We've got time. I've got plenty of time. I'd
love it. I want to go there. Yeah, because we talked about where we're headed here. And you famously talked about the world of 2050, and things are moving fast. Models are getting faster and more capable. Where do you see things like robotics headed? Well, I think two years ago, when I gave a talk at TED, I said the robotics business, both bipedal and other robots, will in 15 years be a larger business than the auto industry is today. We think of the auto industry as one of the larger businesses on the planet, and this other thing will be larger. I don't think there are very many automotive companies thinking of the world that way. They're thinking about how to use a robot in their assembly line, not that that business will be larger than their current business, all driven by the intelligence of robots. So massive opportunities for startups there. And we are seeing a lot of activity. Yeah.
And I think sometimes we underestimate. So when you think about robots in the home, right, it's a very fertile area, but no one's really had a breakthrough. There are so many different issues around the complexity. Actually, sometimes the more time I spend in AI, the more respect I have for the human condition, in a way, because of our ability to move around the world and do things. You know, if you watch the people in robotics getting so excited about a robot folding clothes... perhaps for my 18-year-old I'd be just as excited, but for the average human, I assume they can fold clothes. But I think that's a hell of a box.
But you do get a little stuck in your head that they have to somehow be a human. But it turns out there may just be these breakthrough moments. Like
for example, companionship in the home, right? We have an aging population.
What's one of the biggest, you know, we talk about epidemics in the world. Loneliness
is probably one of the biggest epidemics. What does someone living alone, who maybe has just lost a spouse, value most? Just someone to converse with in a way that feels intuitive and human. We see people using ChatGPT more and more for this conversation, but is there a humanoid-esque breakthrough where it turns out you don't need it to make coffee or fold clothes or do the dishes? Although that would be good, too. But
it might just be something a little bit more simple that still adds a lot of value, and is just the first crawl of the crawl-walk-run of this kind of future that Vinod is talking about, where that whole complex is many times more valuable than what we saw in automobiles. I think it's interesting, because we can sort of think of our present and put robots in
places and do things like that. It's really hard to think about what happens when you really have extremely low-cost labor, manufacturing, et cetera, and then the world you can build from there. Because, you know, we can look at what's a good solution for now, but the cost of building a wonderful, state-of-the-art assisted living facility where you can put a bunch of people together, that cost drops. I think that's the hardest thing for me: to really think through what it really means when you lower the cost. We've seen it with the cost of intelligence. What does it mean when we really lower the cost of labor? Well, my
personal view: sometime, probably towards the end of the next decade, you'll see a massively deflationary economy, because labor will be near free, expertise will be near free, and most functions will be almost zero cost. How exactly it plays out is a little hard to tell: how purchasing power versus production of goods and services plays out. But I expect we'll see a hugely deflationary economy at a level people aren't planning on. So there are social aspects of the adoption of AI that haven't been handled yet.
I think the conversation we need to have is what will people do? I get
asked that a lot. How will people make a living?
I think the minimum standard of living governments are gonna be able to assure people is gonna be much, much higher, without needing to earn an income. I mean, I can't imagine that much better primary care, like 10X more primary care than today, doesn't happen for a dollar a month. I have a hard time imagining how
that doesn't happen. It will be true that it costs almost nothing to have free primary care, free education, AI tutors for everyone, personal tutors for every child. That's already happening.
So there's a set of services that'll be free. There's some hard nuts to crack.
Housing is the hard one. You know, people in the bottom half of the U.S. population spend 40-some percent of their income on housing and food.
So there are some hard nuts. But I do think both are addressable by robotics and better approaches. Well, this has been a very interesting conversation. I'm excited to see where things are headed. Thank you both for joining us here on the podcast. Thank you.
Thank you.