
All things AI w @altcap @sama & @satyanadella. A Halloween Special. 🎃🔥BG2 w/ Brad Gerstner

By BG2 Pod

Summary

Key Takeaways

  • **Microsoft's $134B Bet on OpenAI**: Microsoft's early conviction and investment, totaling around $134 billion for a 27% stake, was crucial for OpenAI's ability to scale and develop AI technology. [02:45]
  • **OpenAI's Dual Structure for Impact**: OpenAI's unique nonprofit and public benefit corporation structure, with a $130 billion nonprofit foundation, aims to ensure AGI benefits all of humanity, initially directing funds towards health and AI security. [03:18], [05:31]
  • **Compute Demand Outstrips Supply**: Despite massive investments and commitments, the demand for compute power, particularly GPUs, consistently outstrips supply, hindering growth for cloud providers and AI companies alike. [01:55], [15:16]
  • **The AGI Trigger for Partnership Terms**: Key aspects of the OpenAI-Microsoft partnership, including model exclusivity and revenue sharing, are set to expire early if Artificial General Intelligence (AGI) is verified, highlighting its significance. [08:07], [09:34]
  • **Navigating Regulatory Patchwork**: The proliferation of state-level AI regulations, like the Colorado AI Act, creates significant compliance challenges and risks stifling innovation, with a call for a unified federal framework. [24:24], [25:27]
  • **AI's Impact on Software and Jobs**: AI is fundamentally altering software architecture and workflows, potentially leading to increased productivity and margin expansion rather than direct job losses, while creating new roles focused on agent interaction. [53:54], [01:04:14]

Topics Covered

  • OpenAI's Early Bet: Conviction Over Certainty
  • OpenAI's Nonprofit Structure: A $130B Head Start
  • Compute Bottlenecks: Power and Infrastructure Constraints
  • AI agents will collapse traditional SaaS applications
  • Reindustrialization Fueled by Data Centers and Global Tech Investment

Full Transcript

Yeah, I think this has really been an

amazing partnership through every phase.

Uh, we had kind of no idea where it was all going to go when we started, as Satya said. But I think this is one of the great tech partnerships, uh, ever, and certainly without Microsoft, and particularly Satya's early conviction, uh, we would not have been able to do this.

What a week. What a week. Great to see

you both. Um Sam, how's the baby?

>> Baby is great. That's the best thing

ever, man. Every every cliche is true

and it is the best thing ever.

>> Uh, hey Satya, with all your time...

>> The smile on Sam's face whenever he talks about, uh... it's just, his baby is just so different. It's that and compute, I guess, when he talks about compute and his baby.

>> Uh, well, Satya, have you given him any dad tips with all this time you guys have spent together?

>> I said, just enjoy it. I mean, it's so awesome. You know, we had our children so young, and I wish I could redo it. So in some sense it's just the most precious time, and as they grow it's just so wonderful. I'm so glad Sam is, um...

>> I'm happy to be doing it older, but I do

think sometimes, man, I wish I had the

energy when I was like 25. Uh that

part's harder.

>> No doubt about it. What's the average

age at OpenAI, Sam? Any idea? It's

young.

>> It's not crazy young. Not Not like Not

like most Silicon Valley startups. I

don't know, maybe low 30s average.

>> Are babies trending positively or negatively?

>> Babies trending positively.

>> Oh, that's good. That's good. Yeah.

>> Well, you guys, such a big week. You

know, I was thinking about it: the week started at Nvidia's GTC, you know, Nvidia just hit $5 trillion. Google, Meta, Microsoft. Satya, you had your earnings yesterday,

you know, and we heard consistently not

enough compute, not enough compute, not

enough compute. We got rate cuts on

Wednesday. The GDP's tracking near 4%.

And then I was just saying to Sam, you

know, the president's cut these massive

deals in Malaysia, South Korea, Japan,

sounds like with China. You know, deals that provide really incredible financial firepower to reindustrialize America. 80 billion for new nuclear fission, all the things that you guys

need to build more compute, but

certainly what wasn't lost in all

of this was you guys had a big

announcement on Tuesday that clarified

your partnership. Congrats on that. And

I thought we'd just start there. I

really want to just break down the deal

in really simple plain language to make

sure I understand it, and others too,

but you know we'll just start with your

investment. Satya, you know, Microsoft

started investing in 2019 has invested

in the ballpark of $134 billion into OpenAI, and for that you get 27% of the

business ownership in the business on a

fully diluted basis I think it was about

a third and you took some dilution over

the course of last year with all the

investment

So, does that sound about right in terms

of ownership?

>> Yeah, it does. But I would say before

even our stake in it, Brad, I think

what's pretty unique about OpenAI is the

fact that, as part of OpenAI's process of restructuring, one of the largest nonprofits gets created. I mean, let's

not forget that. You know, in some sense, I'd say at Microsoft we are very proud of the fact that we're associated with two of the largest nonprofits, the Gates Foundation

and now the OpenAI Foundation. So,

that's, I think, the big news. Uh, we obviously, you know, are thrilled.

It's not what we thought. And as I said

to somebody, it's not like when we first

invested our billion dollars that, oh,

this is going to be the 100 bagger that

I'm going to be talking to VCs

about, but here we are. But we are very

thrilled to be an investor and an early

backer. Um, and it's really a testament to what Sam and

team have done quite frankly. I mean

they obviously had the vision early

about what this technology could do and

they ran with it and just executed you

know in a masterful way.

>> Yeah. I think this has really been an

amazing partnership through every phase.

Uh, we had kind of no idea where it was all going to go when we started, as Satya said. But I think this is one of the great tech partnerships, uh, ever, and certainly without Microsoft, and particularly Satya's early conviction, uh,

we would not have been able to do this.

I don't think there were a lot of other

people that would have uh been willing

to take that kind of a bet given what

the world looked like at the time. Um we

didn't know exactly how the tech was

going to go. Well, not exactly. We

didn't know at all how the tech was

going to go. We just had a lot of

conviction in this one idea of pushing on deep learning and trusting

that if we could do that, we'd figure

out ways to make wonderful products and

create a lot of value and also, as Satia

said, create what we believe will be the

largest nonprofit ever. And I think it's

going to do amazingly great things. It

it was I I really like the structure

because it lets the nonprofit grow in

value while the PBC is able to get the

capital that it needs to keep scaling. I

don't think the nonprofit would be able

to be this valuable if we didn't come up

with the structure and if we didn't have

partners around the table that were

excited for it to work this way. But,

you know, I think it's been more than six years since we first started

this partnership and uh a pretty crazy

amount of achievement for six years and

I think much much more to come. I hope

that Satya makes a trillion dollars on

the investment, not hundred billion, you

know, whatever it is.

>> Well, as part of the restructuring, you

guys talked about it. You have this

nonprofit on top and a public benefit

corp below. It's pretty insane. The

nonprofit is already capitalized with

$130 billion. $130 billion of OpenAI stock. It's one of the largest in the

world out of the gates. It could end up

being much much larger. The California

Attorney General said they're not going

to object to it. You already have

this 130 billion dedicated to making

sure that AGI benefits all of humanity.

You announced that you're going to

direct the first 25 billion to health

and AI security and resilience. Sam,

first let me just say, you know, as

somebody who participates in the

ecosystem, kudos to you both. It's

incredible this contribution to the

future of AI. But Sam, talk to us a bit

about the importance of the choice

around health and resilience. And

then help us understand how do we make

sure that you get maximal benefit

without it getting weighed down, as we've seen with so many nonprofits, by its own political biases.

>> Yeah. First of all, the best way to create a bunch of value for the world is hopefully what we've already been doing, which is to make these amazing

tools and just let people use them. And

I think capitalism is great. I think

companies are great. I think people are

doing amazing work getting advanced AI

into the hands of a lot of people and

companies. They're doing incredible

things. There are some areas where, I think, market forces don't quite work for

what's in the best interest of people

and you do need to do things in a

different way. Uh there are also some

new things with this technology that

just haven't existed before like the

potential to use AI to do science at a

rapid clip like really truly automated

discovery. And when we thought about the

areas we wanted to first focus on,

clearly if we can cure a lot of disease

and make the data and information for

that broadly available, that would be a wonderful thing to do for

the world. And then on this point of AI

resilience, I do think some things may

get a little strange and they won't all

be addressed by companies doing their

thing. So as the world has to navigate

through this transition, if we can fund

some work to help with that, and that

could be, you know, cyber defense, that

could be AI safety research, that could

be economic studies, all of these

things, helping society get through this

transition smoothly. We're very

confident about how great it can be on

the other side, but you know, I'm sure

there will be some choppiness along the

way.

>> Let's keep busting through the, um, the deal. So, models and exclusivity. Sam, OpenAI can distribute its leading models on Azure, but I don't think you can distribute them on any of the other big clouds for seven years, until 2032, though that would end earlier if AGI is verified. We can come back to that. But you can distribute your open-source models, Sora, agents, Codex, wearables, everything else on other platforms. So Sam, I assume this means no ChatGPT or GPT-6 on Amazon or Google.

>> No. So, first of all, we want to do lots of things together to help, you know, create value for Microsoft. We want them to do lots of things to create value for us. And

there are many many things that'll

happen in that category. Um, we are

keeping what Satia termed once and I

think it's a great phrase of stateless

APIs on Azure exclusively through 2030.

And everything else we're going to, you

know, distribute elsewhere and that's

obviously in Microsoft's interest, too.

So, we'll put lots of products, lots of

places, and then this thing we'll we'll

do on Azure and people can get it there

or or via us. And I think that's great.

>> And then the rev share, there's still a

rev share that gets paid by OpenAI to

Microsoft on all your revenues that also

runs until 2032 or until AGI is

verified. So, let's just assume for the

sake of argument, I know this is

pedestrian, but it's important that the

rev share is 15%. So that would mean if

you had 20 billion in revenue that

you're paying three billion to Microsoft

and that counts as revenue to Azure.

Satya, does that sound about

right?
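Brad's rev-share arithmetic here is easy to check. A minimal sketch in Python, assuming his hypothetical 15% rate (he flags himself that the real rate is not confirmed; the function name and figures below are illustrative only):

```python
# Hypothetical rev-share arithmetic from Brad's example: a rate applied
# to OpenAI revenue, paid to Microsoft. The 15% rate and the $20B
# revenue figure are the host's assumptions, not confirmed deal terms.

def rev_share_payment(revenue_billions: float, rate: float = 0.15) -> float:
    """Return the rev-share payment, in billions of dollars."""
    return revenue_billions * rate

# $20B of revenue at a 15% rate yields the $3B payment Brad cites.
print(rev_share_payment(20.0))
```

At the assumed rate, $20B of OpenAI revenue would indeed route $3B to Microsoft, which is the figure Brad uses in the question.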

>> Yeah, we have a rev share and I think as

you characterized it, it either goes to AGI or till the end of the term. Uh, and

I actually don't know exactly where we

count it quite honestly whether it goes

into Azure or somewhere else. That's a

good question. It's a good question for

Amy.

>> Given that both exclusivity and the rev share end early in the case AGI is

verified, it seems to make AGI a pretty

big deal. And as I understand it, you

know, if if OpenAI claimed AGI, it

sounds like it goes to an expert panel.

And you guys basically select a jury

who's got to make a relatively quick

decision whether or not AGI has been

reached. Satya, you said on yesterday's earnings call that nobody's even close to

getting to AGI and you don't expect it

to happen anytime soon. You talked about

this spiky and jagged intelligence. Sam,

I've heard you perhaps sound a little

bit more bullish on, you know, when we

might get to AGI. So, I guess the

question is to you both. Do you worry

that over the next two or three years

we're going to end up having to call in

the jury to effectively make a call

on whether or not we've hit AGI?

>> I realize you got to try to make some drama between us here.

>> you know, I think putting a process in

place for this is a good thing to do. I

expect that the technology will take

several surprising twists and turns and

we will continue to be good partners to

each other and figure out what makes

sense.

>> That's well said. I think uh and that's

one of the reasons why I think this

process we put in place is a good one

and at the end of the day I'm a big

believer in the fact that intelligence

uh capability wise is going to continue

to improve and our real goal quite

frankly is that which is how do you put

that in the hands of people and

organizations so that they can get the

maximum benefits and that was the

original mission of OpenAI that attracted me to OpenAI and Sam and team, and that's kind of what we plan to continue on.

>> Brad to say the obvious if we had super

intelligence tomorrow, we would still

want Microsoft's help getting this

product out into people's hands, and we'd want them to.

>> of course. Of course. Yeah. No, it again

I'm asking the questions I know that are

on people's minds and that makes a ton

of sense to me. Obviously Microsoft is

one of the largest distribution

platforms in the world. You guys have

been great partners for a long time. But

I think it dispels some of the myths

that are out there. But let's shift

gears a little bit. You know, obviously

OpenAI is one of the fastest growing

companies in history. Satya, you said

on the pod a year ago, this pod, that

every new phase shift creates a new

Google and the Google of this phase

shift is already known and it's OpenAI.

And none of this would have been

possible had you guys not made these

these huge bets. With all that said, you

know, OpenAI's revenues are still a

reported 13 billion in 2025. And Sam, on

your live stream this week, you talked

about this massive commitment to

compute, right? $1.4 trillion over the next four or five years, with, you know, big commitments: 500 billion to Nvidia, 300 billion to AMD and Oracle, 250 billion to Azure. So I think the single

biggest question I've heard all week and

and hanging over the market is how you

know, how can a company with 13 billion in revenues make $1.4 trillion of spend commitments? You know, and

you've heard the criticism, Sam.

>> First of all, we're doing well more

revenue than that. Second of all, Brad,

if you want to sell your shares, I'll

find you a buyer.

>> Enough. Like, you know, I think there's a lot of people who would love to buy OpenAI shares. I don't think you

>> including myself, including myself,

>> people who talk with a lot of like

breathless concern about our compute

stuff or whatever that would be thrilled

to buy shares. So I think we could sell, you know, your shares or anybody

else's to some of the people who are

making the most noise on Twitter

whatever about this very quickly. We do

plan for revenue to grow steeply.

Revenue is growing steeply. We are

taking a forward bet that it's going to

continue to grow, and that not only will ChatGPT keep growing but we will

be able to become one of the important

AI clouds that our consumer device

business will be a significant and

important thing that AI that can

automate science will create huge value.

So, you know, there are not many times

that I want to be a public company, but

one of the rare times it's appealing is

when those people are writing these

ridiculous OpenAI is about to go out of

business and, you know, whatever. I

would love to tell them they could just

short the stock and I would love to see

them get burned on that. Um, but

you know, we carefully plan. We understand where the technology, where the capability, is going to go, and how the products we can build around that and the revenue we can generate. We

might screw it up like this is the bet

that we're making and we're taking a

risk along with that. A certain risk is

if we don't have the compute, we will

not be able to generate the revenue or

make the models at this kind of

scale.

>> Exactly. And

>> let me just say one thing uh Brad as

both a partner and um an investor there

has not been a single business plan that I've seen from OpenAI that they have put in and not beaten. So in some sense

this is the one place where you know in

terms of their growth and just even the

business it's been unbelievable

execution quite frankly I mean obviously

openai everyone talks about all the

success in the usage and what have you

but even um I would say all up uh the

business execution has been just pretty

unbelievable.

>> I heard Greg Brockman say on CNBC a couple weeks ago, right? If

we could 10x our compute, we might not

have 10x more revenue, but we'd

certainly have a lot more revenue

>> Simply because of lack of compute power.

>> Yeah, it's just really wild when I just look at how much we are

held back. And in many ways, we have,

you know, we've scaled our compute

probably 10x over the past year, but if

we had 10x more compute, I don't know if

we'd have 10x more revenue, but I don't

think it'd be that far. And we heard

this from you as well last night, Satya,

that you were compute constrained and

growth would have been higher even if if

you had more compute. So help us contextualize, Sam. Maybe, like, how compute constrained do you feel today? And do you

when you look at the buildout over the

course of the next two to three years do

you think you'll ever get to the point

where you're not compute constrained?

>> We talk about this question of is there

ever enough compute a lot. I think the best way to think about this is like energy. You can talk about demand for energy at a certain price point, but you can't talk about demand for energy without talking about, you know, different demand at different price levels. If the price of compute

per like unit of intelligence or

whatever, however you want to think

about it, fell by a factor of 100

tomorrow, you would see usage go up by

much more than 100 and there'd be a lot

of things that people would love to do

with that compute that just make no

economic sense at the current cost, but

there would be new kinds of demand. Now, on the other hand, as the models get

even smarter and you can use these

models to cure cancer or discover novel

physics or drive a bunch of humanoid

robots to construct a space station or

whatever crazy thing you want then maybe

there's huge willingness to pay a much

higher cost per unit of

intelligence for a much higher level of

intelligence that we don't know yet but

I would bet there will be. So I think when you talk about capacity, it's, you know, cost per unit and capability per unit, and without those curves it's not a super well specified problem.
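Sam's point about demand at different price levels can be sketched with a toy constant-elasticity demand curve. The elasticity value below is purely illustrative, not an OpenAI estimate; it just shows the shape of the dynamic he describes:

```python
# Toy constant-elasticity demand curve for compute. With an elasticity
# above 1, a 100x price drop raises usage by more than 100x, which is
# the qualitative behavior Sam describes. The elasticity is made up.

def compute_demand(price: float, elasticity: float = 1.2) -> float:
    # quantity = price^(-elasticity): lower price, disproportionately
    # higher usage when elasticity > 1
    return price ** (-elasticity)

baseline = compute_demand(1.0)
after_drop = compute_demand(0.01)   # price falls by a factor of 100
print(after_drop / baseline)        # ~251x the usage at 1/100th the price
```

With elasticity at exactly 1, usage would grow exactly 100x; any elasticity above 1 gives the "much more than 100" behavior Sam predicts.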

>> Yeah. I mean, I think the one thing that, you know, Sam, you've talked about, which I think is the right way to think about it, is that if intelligence is a log of compute, then you try and really make

sure you keep getting efficient and so

that means the tokens per dollar per

watt uh and the economic value that the

society gets out of it is what we should

maximize, and reduce the costs. And so that's where, like, the Jevons paradox point is, right, which is you keep reducing it, commoditizing in some sense intelligence, uh, so that it becomes the real driver of GDP growth all around.

>> Unfortunately, it's something closer to

uh log of intelligence equals log of

compute. But we may figure out better

scaling laws and we may figure out how

to beat this. Yeah,
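A minimal sketch of the two relationships being contrasted here: Satya's "intelligence is a log of compute" versus a power-law reading of Sam's "log of intelligence equals log of compute." The constants below are illustrative, not empirical:

```python
import math

# Under a logarithmic law, each 10x of compute adds a constant increment
# of "intelligence"; under a power law (log-log), each 10x multiplies it
# by a constant factor. The constants a and k are made-up illustrations.

def intelligence_log_law(compute: float, a: float = 1.0) -> float:
    # intelligence = a * log10(compute)
    return a * math.log10(compute)

def intelligence_power_law(compute: float, k: float = 0.5) -> float:
    # log(intelligence) = k * log(compute)  =>  intelligence = compute^k
    return compute ** k

# Going from 1e5 to 1e6 units of compute (a 10x scale-up):
print(intelligence_log_law(1e6) - intelligence_log_law(1e5))      # constant additive gain
print(intelligence_power_law(1e6) / intelligence_power_law(1e5))  # constant multiplicative gain
```

Either way, the cost of each marginal increment of capability grows steeply with compute, which is why both speakers emphasize efficiency gains and better scaling laws.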

>> we heard from both Microsoft and Google

yesterday. Both said their cloud

businesses would have been growing

faster if they had more GPUs. You know,

I asked Jensen on this pod if there was

any chance over the course of the next 5

years we would have a compute glut. and

he said there's a virtually non-existent chance in the next 2 to 3 years, and I

assume you guys would both agree with

Jensen that, while we can't see out 5, 6, 7 years, certainly over the course of the next 2 to 3 years, for the reasons we just discussed, it's almost a non-existent chance that you have excess compute?

>> Well, I mean, I think

the cycles of demand and supply in this particular case you can't really predict, right? I mean, the point is: what's the secular trend? The secular

trend is what Sam said, which is at the

end of the day, because quite frankly,

the biggest issue we are now having is not a compute glut, but power, and it's sort of the ability to get the builds done fast enough close to power.

So, if you can't do that, you may

actually have a bunch of chips sitting

in inventory that I can't plug in. In

fact, that is my problem today, right?

It's not a supply issue of chips. It's

actually uh the fact that I don't have

warm shells to plug into. And so how some supply chain constraints emerge is tough to predict, uh, because the demand is just, you know, tough to predict, right? I mean, it's not like Sam and I would want to be sitting here saying, oh my god, we're less short on compute, because we just were not that good at being able to project out what the demand would really look like. So I think that that's... and by

the way, the worldwide side, right? It's one thing to sort of talk about one segment in one country, but it's about, you know, really getting it out to everywhere in the world. And so there will be constraints, and how we work through them is going to be the most important thing. It won't be a linear path for sure.

>> There will come a glut for sure, and whether that's like in

two to three years or five to six I

can't tell you but uh like it's going to

happen at some point probably several

points along the way. Like, there's something deep about human psychology here and bubbles, and also, as Satya said, it's such a complex supply chain: weird stuff gets built, the technological landscape shifts in big ways. So, you know, if a very cheap form of energy comes online soon at mass scale, then a lot of people are going to be extremely burned with existing contracts they've signed. And if we can continue this unbelievable

reduction in cost per unit of

intelligence let's say it's been

averaging like 40x for a given level

per year. You know, that's like a very

scary exponent

from an infrastructure buildout

standpoint. Now, again, we're taking the

bet that there will be a lot more demand

as that gets cheaper, but I have some

fear that it's just like, man, we keep

going with these breakthroughs and

everybody can run like a personal AGI on

their laptop and we just did an insane

thing here. Some people are going to get

really burned like has happened in every

other tech infrastructure cycle at some

points along the way.

>> I think that's really well said and you

have to hold those two simultaneous

truths. We had that happen in 2000-2001, and

yet the internet became much bigger and

produced much greater outcomes for

society than anybody estimated in that

period of time.

>> Yeah. But I think the one thing that Sam said that is not talked about enough is, for example, the optimizations that OpenAI has done on the inference stack for a given GPU. I mean, it's kind of like, you know, we talk about the Moore's law improvement on one end, but the software improvements are much more exponential than that.

>> Someday we will make an incredible consumer device that can run a GPT-5 or GPT-6 capable model completely locally at a low power draw. And this is like so hard to wrap my head around.

>> That will be incredible. And you know

that's the type of thing I think that

scares some of the people who are

building obviously these large

centralized compute uh stacks. And

Satya, you've talked a lot about the

distribution both to the edge as well as

having inference capability distributed

around the world.

>> Yeah, I mean, the way

at least I've thought about it is more

about really building a fungible fleet.

I mean when I look at sort of in the

cloud infrastructure business, one of

the key things you have to do is have

two things. One is, in this context, a very efficient token factory, and then high utilization.

That's that's it. There are two simple

things that you need to achieve and in

order to have high utilization you have

to have multiple workloads that can be

scheduled even on the training. I mean,

if you look at the AI pipelines, there's

pre-training, there's mid-training,

there's post- training, there's RL. You

want to be able to do all of those

things. So, thinking about fungibility

of the fleet is everything for a cloud

provider.

>> Okay. So, Sam, you referenced, you know,

and and Reuters was reporting yesterday

that OpenAI may be planning to go public

late 26 or in 27.

>> No, no, no. We don't have anything that specific. I'm a realist.

I assume it will happen someday, but

that was uh I don't know why people

write these reports. We don't have, like, a date in mind or a decision to do this or

anything like that. I just assume it's

where things will eventually go.

>> But it does seem to me that if you guys are doing in excess of a hundred billion dollars of revenue in 28 or 29 that you at least would be in pos

>> what?

>> How about 27?

>> Yeah, 27 even better. You are in

position to do an IPO and the rumored

trillion dollars. Again, just to

contextualize for listeners, if you guys

went public at 10 times 100 billion in

revenue, right, which would be, I think,

a lower multiple than Facebook went

public at, a lower multiple than a lot

of other uh big consumer companies went

public at, that would put you at a

trillion dollars. If you floated 10 to

20% of the company, that raises a

$100 to $200 billion, which seems like

that would be a good path to fund a lot

of the growth and a lot of the stuff

that we just talked about. So, you're not opposed to it?

>> We may be able to fund the company with revenue growth, which is what I would like us to do.
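The valuation arithmetic Brad walks through can be checked directly. All inputs (the 10x revenue multiple, the $100B revenue figure, the 10-20% float) are his hypotheticals from the conversation, not announced plans:

```python
# Brad's hypothetical IPO arithmetic: valuation = revenue x multiple,
# and proceeds = valuation x fraction of the company floated.
# All figures are the host's illustrative assumptions.

def ipo_math(revenue_b: float, multiple: float, float_fraction: float):
    valuation_b = revenue_b * multiple         # e.g. $100B x 10 = $1,000B
    proceeds_b = valuation_b * float_fraction  # cash raised by the float
    return valuation_b, proceeds_b

val, low = ipo_math(100.0, 10.0, 0.10)   # 10% float
_, high = ipo_math(100.0, 10.0, 0.20)    # 20% float
print(val, low, high)   # a $1T valuation, raising $100B to $200B
```

At 10x on $100B of revenue, the valuation lands at the rumored $1 trillion, and floating 10-20% of the company would raise the $100-200 billion Brad cites.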

>> But no doubt about it.

Well, I've also said I think that this

is such an important company and you

know there are so many people including

my kids who like to trade their little

accounts, and they use ChatGPT, and I

think having retail investors have an

opportunity to buy one of the most

important and largest

>> honestly that that is probably the

single most appealing thing about it to

me. Um that would be really nice.

One of the things I've talked to you

both about um shifting gears again is

part of the big beautiful bill, you

know, Senator Cruz had included federal preemption so that we wouldn't have this state patchwork of 50 different laws that mires the industry down in kind of needless compliance and regulation.

Unfortunately, it got killed at the last second by Senator Blackburn because, frankly, I think AI is pretty poorly understood in Washington, and there's a lot of doomerism, I think, that has gained traction in Washington. So now we have

state laws like the Colorado AI Act, which goes into full effect in February, I believe, and creates this whole new class of litigants: anybody who claims any unfair impact from algorithmic discrimination in a chatbot. So somebody could claim harm for countless reasons.

Sam, how worried are you that, you know,

having this state patchwork of AI, you

know, poses real challenges to, you

know, our ability to continue to accelerate and compete around the world?

>> I don't know how we're supposed to

comply with that California, sorry,

Colorado law. I would love them to tell

us uh and, you know, we'd like to be

able to do it. But just from what I've read of it, I literally don't know what we're supposed to do. I'm very worried about a 50-state

patchwork. I think it's a big mistake. There's a reason we don't usually do that for these sorts of things. I think it'd be bad.

>> Yeah. I mean, I think the fundamental problem of, um, you know, this

patchwork approach is quite frankly, I

mean, between OpenAI and Microsoft,

we'll figure out a way to navigate this,

right? I mean, uh we can figure this

out. The problem is anyone starting a startup and trying to navigate this. It just goes to the exact opposite of, I think, what the intent here

is which obviously safety is very

important making sure that the

fundamental um you know concerns people

have are addressed but there's a way to

do that at the federal level. So I think if the US doesn't do this, again, you know, the EU will do it, and then that'll cause its own issues. So I think if the US leads, it's better, uh, as one regulatory framework.

>> for sure.

>> And to be clear, it's not that one is

advocating for no regulation. It's

simply saying let's have, you know,

agreed upon regulation at the federal

level as opposed to 50 competing state laws, which certainly, uh, firebombs the AI startup industry and, I think, makes it super challenging even

for companies like yours who can afford

to defend all these cases.

>> Yeah. And I would just say quite frankly

my hope is that this time around even

across EU and the United States like

that'll be the dream right quite frankly

for any European startup.

>> I don't think that's going to happen.

>> What is that?

>> That would be great. I wouldn't hold your breath for that one. That would be great. No, but I really

think that, if you think about it, right, if anyone in Europe is thinking about, you know, how they can participate in this AI economy with their companies, uh, this should be the main concern there as well. So therefore, uh, I hope

there is some enlightened approach to it

but I agree with you that you know today

I wouldn't bet on that.

I do think that with Sacks as the AI czar, you at least have a president that I think might fight for that in terms of coordination of AI policy, using

trade as a lever to make sure that, you

know, we don't end up with overly

restricted European policy. But we shall

see. I think first things first, federal

preemption in the United States is

pretty critical. You know, we've been

down in the weeds a little bit here,

Sam. So, I want to telescope out a little bit. I've heard people on your team talk about all the great things coming up, and as you start thinking about much more unlimited compute, ChatGPT-6 and beyond, robotics, physical devices, scientific research: as you look forward to 2026, what do you think surprises us the most? What are you most excited about in terms of what's on the drawing board?

>> You just hit on a lot of the key points there. I think

Codex has been a very cool thing to watch this year, and as these go from multi-hour tasks to multi-day tasks, which I expect to happen next year, what people will be able to do to create software at an unprecedented rate, and really in fundamentally new ways; I'm very excited for that. I think we'll see that in other industries too. I have a bias towards coding; I understand that one better. I think we'll see it really start to transform what people are capable of. I hope for very small scientific discoveries in 2026, but if we can get those very small ones, we'll get bigger ones in future years.

It's a really crazy thing to say that AI is going to make a novel scientific discovery in 2026, even a very small one. This is a wildly important thing to be talking about, so I'm excited for that. Certainly robotics, and new kinds of computers in future years; that'll be very important. But yeah, my personal bias is that if we can really get AI to do science here, that is superintelligence in some sense. If this is expanding the total sum of human knowledge, that is a crazy big deal.

>> Yeah. To use your Codex example, I think it's the combination of the model capability and the UI. If you think about the magical moment that happened with ChatGPT, it was the UI that met intelligence; it just took off right there. It was an unbelievable form factor, and some of it was also that the instruction-following piece of the model capability was ready for chat. I think that's what Codex and these coding agents are about to help us with: the coding agent goes off for a long period of time, comes back, and then I'm dropped into what I should steer. One of the metaphors I think we're all working towards is macro delegation and micro steering. What is the UI that meets this new intelligence capability? You can see the beginnings of that with Codex; at least the way I use it inside GitHub Copilot, it's just a different way than the chat interface. And I think that would be a new way for the human-computer interface. Quite frankly, it's probably bigger than

>> That might be the departure.

>> That's one reason I'm very excited that we're doing new form factors of computing devices, because computers were not built for that kind of workflow very well. Certainly a UI like ChatGPT is wrong for it. But this idea that you can have a device that is sort of always with you, but able to go off and do things, get micro-steered by you when it needs it, and have really good contextual awareness of your whole life and flow: I think that'll be cool.
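The "macro delegation, micro steering" workflow described here can be sketched as a tiny control loop: the user delegates a large task, the agent works through sub-steps, and it pauses at checkpoints where the user can redirect it. Everything below (the sub-task list, the scripted steering policy) is hypothetical scaffolding for illustration, not any real agent API.

```python
# Sketch of "macro delegation, micro steering": a hypothetical agent
# works through sub-tasks autonomously and pauses at checkpoints so
# the user can redirect it. No real agent framework is used here.

def run_delegated_task(subtasks, steer):
    """Run each sub-task; after each, ask `steer` whether to continue,
    retry, or stop. Returns the log of completed work."""
    log = []
    for task in subtasks:
        result = f"done: {task}"          # stand-in for real agent work
        decision = steer(task, result)    # micro-steering checkpoint
        if decision == "stop":
            break
        if decision == "retry":
            result = f"retried: {task}"
        log.append(result)
    return log

# A scripted steering policy standing in for a human reviewer.
def scripted_steer(task, result):
    if task == "write tests":
        return "retry"    # the user nudges the agent on this one step
    return "continue"

log = run_delegated_task(
    ["draft plan", "write tests", "implement feature"], scripted_steer
)
print(log)  # ['done: draft plan', 'retried: write tests', 'done: implement feature']
```

The point of the sketch is the shape of the loop: long autonomous stretches punctuated by cheap steering decisions, rather than a turn-by-turn chat.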

>> And what neither of you has talked about is the consumer use case. I think a lot about how, again, we go onto this device and have to hunt and peck through a hundred different applications and fill out little web forms, things that really haven't changed in 20 years. But to have a personal assistant, something we perhaps take for granted if we actually have one, and to give a personal assistant for virtually free to billions of people around the world to improve their lives, whether it's ordering diapers for their kid, booking their hotel, or making changes to their calendar. I think sometimes it's the pedestrian that's the most impactful. And as we move from answers to memory and actions, and then the ability to interface with that through an earbud or some other device that doesn't require me to constantly be staring at this rectangular piece of glass: I think it's pretty extraordinary.

>> I think that's what Sam was teasing.

>> Yeah. Yeah.

>> Hope we get it right. I got to drop off

unfortunately.

>> Sam, it was great to see you. Thanks for

joining us. Congrats again on this big

step forward and we'll talk soon.

>> Thanks for letting me crash.

>> See you Sam. Take care. See you.

>> As Sam well knows, we're certainly a buyer, not a seller. But sometimes, you know, I think it's important, because we're a pretty small group; we spend all day long thinking about this stuff, right? And so conviction comes from the 10,000 hours we've spent thinking about it. But the reality is we have to bring along the rest of the world, and the rest of the world doesn't spend 10,000 hours thinking about this. Frankly, they look at some things that appear overly ambitious and get worried about whether or not we can pull those things off. You took this idea to the board in 2019 to invest a billion dollars into OpenAI. Was it a no-brainer in the boardroom? Did you have to expend any political capital to get it done? Dish a little bit for me what that moment was like, because I think it was such a pivotal moment, not just for Microsoft, not just for the country, but I really do think for the world.

>> Yeah, it's interesting when you look back at the journey. We were involved even in 2016, when OpenAI initially started; in fact, Azure was even the first sponsor, I think. They were doing a lot more reinforcement learning at that time; I remember the Dota 2 competition, which I think happened on Azure. Then they moved on to other things. I was interested in RL, but quite frankly, and this speaks a little bit to your 10,000 hours, or the prepared mind, Microsoft since 1995 was obsessed: Bill's obsession for the company was natural language. After all, we're a coding company; we're an information work company.

>> So when Sam in 2019 started talking about text and natural language and transformers and scaling laws, that's when I said, wow, this is interesting. This was a team whose direction of travel was now clear, and it had a lot more overlap with our interests. So in that sense it was a no-brainer. Obviously, you go to the board and say, hey, I have an idea: take a billion dollars and give it to this crazy structure which we don't even quite understand, it's a nonprofit, blah blah blah, and say go for it. There was a debate.

Bill was, kind of rightfully so, skeptical. And then, once he saw the GPT-4 demo, that became the thing Bill has talked about publicly: when he saw it, he said it was the best demo he'd seen since what Charles Simonyi showed him at Xerox PARC. But quite honestly, none of us could have known. So the moment for me was: let's go give it a shot. Then seeing the early Codex inside GitHub Copilot, seeing the code completions and seeing it work, that's when I felt I could go from 1 to 10, because that was the big call, quite frankly. One was controversial.

>> But the 1 to 10 was what really made this entire era possible, and then obviously the great execution by the team and the productization on their part and our part. If I think about it, the collective monetization reach of GitHub Copilot, ChatGPT, Microsoft 365 Copilot, and the consumer Copilot: you add those four things and that is the biggest set of AI products out there on the planet, and that's obviously what has let us sustain all of this.

>> I think not many people know that your CTO Kevin Scott, an ex-Googler, lives down here in Silicon Valley. And to contextualize it: Microsoft had missed out on search, had missed out on mobile. You became CEO having almost missed out on the cloud; you've described it as catching the last train out of town to capture the cloud. And I think you were pretty determined to have eyes and ears down here so you didn't miss the next big thing. So I assume that Kevin played a good role for you as well.

>> Absolutely.

>> DeepSeek and OpenAI.

>> Yeah. In fact, I would say Kevin's conviction... and Kevin was also skeptical. That was the thing: I always watch for people who are skeptical and then change their opinion, because to me that's a signal. I'm always looking for someone who's a non-believer in something who then suddenly changes and gets excited about it. I have all the time in the world for that, because I'm then curious: why, what changed? And so Kevin, like all of us, started out kind of skeptical. In some sense it defies what we all learned in school: god, there must be an algorithm to crack this, versus just scaling laws and throwing compute at it. But quite frankly, Kevin's conviction that this was worth going after is one of the big things that drove this.

>> Well, we

talk about that investment that's now worth $130 billion, and I suppose could be worth a trillion someday, as Sam says, but in many ways that understates the value of the partnership, right? You have the value in the rev share, billions per year going to Microsoft. You have the profit you make off the $250 billion Azure compute commitment from OpenAI. And of course you get huge sales from the exclusive distribution of the API. So talk to us about how you think about the value across those domains, especially how this exclusivity has brought to Azure a lot of customers who may have been on AWS.

>> Yeah, absolutely. To us, if I look at it, aside from all the equity parts, the real strategic thing that comes together, and that remains going forward, is the stateless API exclusivity on Azure. That helps, quite frankly, both OpenAI and us and our customers, because when somebody in the enterprise is trying to build an application, they want an API that's stateless; they want to mix it up with compute and storage, put a database underneath it to capture state, and build a full workload. That's where Azure comes together with this API. And that's what we're doing even with Azure Foundry, because in some sense, let's say you want to build an AI application: the key thing is how you make sure the evals are great, and that's where you need even a full app server in Foundry. That's what we've done, and so I feel that that is the way we will go to market in our infrastructure business. The other side of the value capture for us is going to be incorporating all this IP. Not only do we have the exclusivity of the model on Azure, we have access to the IP. Even forgetting the know-how and knowledge side of it, having royalty-free access for seven more years gives us a lot of flexibility business-model-wise. It's kind of like having a frontier model for free, in some sense, if you're an MSFT shareholder. That's where you should start from: we have a frontier model that we can deploy, whether it's in GitHub, in M365, or in our consumer Copilot, then add our own data to it and post-train it, so we can have it embedded in the weights there. And so we're excited about the value creation on both the Azure and infrastructure side as well as in our high-value domains, whether it's in health, knowledge work, coding, or security.
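As a rough illustration of the stateless-API pattern described here (the model endpoint holds no state, so the application puts a database underneath it to capture conversation state), here is a minimal sketch. The `call_model` function is a hypothetical stand-in for any stateless chat-completions-style endpoint, not an actual Azure or OpenAI API.

```python
import sqlite3

# Sketch of a stateless model API with application-managed state: the
# app persists every turn in its own database and replays the history
# on each call. `call_model` is a placeholder, not a real endpoint.

def call_model(messages):
    # Placeholder: report how much context the stateless API was given.
    return f"reply ({len(messages)} messages of context)"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE turns (session TEXT, role TEXT, content TEXT)")

def chat(session, user_text):
    # Load prior state from the database (the API itself keeps none).
    rows = db.execute(
        "SELECT role, content FROM turns WHERE session = ?", (session,)
    ).fetchall()
    messages = [{"role": r, "content": c} for r, c in rows]
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)
    # Persist both sides of the turn so the next call sees full context.
    db.execute("INSERT INTO turns VALUES (?, 'user', ?)", (session, user_text))
    db.execute("INSERT INTO turns VALUES (?, 'assistant', ?)", (session, reply))
    return reply

print(chat("s1", "hello"))      # reply (1 messages of context)
print(chat("s1", "follow-up"))  # reply (3 messages of context)
```

The design choice being gestured at: because the endpoint is stateless, the state (and therefore the durable workload) lives in the customer's compute, storage, and database, which is where the cloud provider monetizes.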

>> You've been consolidating the losses from OpenAI; I think you just reported earnings yesterday and consolidated $4 billion of losses in the quarter. Do you think that investors may even be attributing negative value because of the losses, as they apply their multiple to earnings? Satya, when I hear this, I think about all of those benefits we just described, not to mention the look-through equity value you own in a company that could be worth a trillion unto itself. Do you think the market is misunderstanding the value of OpenAI as a component of Microsoft?

>> Yeah, that's a good one. I think the approach that Amy is going to take is full transparency, because at some level I'm no accounting expert, so the best thing to do is to give all of the transparency, I think, this time around as well. That's why we give the non-GAAP and GAAP numbers, so that at least people can see the EPS numbers. The common-sense way I look at it, Brad, is simple. If you've invested, let's call it, $13.5 billion, you can of course lose $13.5 billion, but you can't lose more than $13.5 billion. At least the last time I checked, that's what you have at risk. You could also say, hey, the $135 billion that our equity stake is worth today is sort of illiquid, what have you; we don't plan to sell it, so it's got risk associated with it. But the real story, I think, that you were pulling at is all the other things that are happening. What's happening with Azure growth: would Azure be growing if we had not had the OpenAI partnership? To your point, the number of customers who came from other clouds for the first time: this is the thing we really benefited from. What's happening with Microsoft 365: in fact, one of the questions about Microsoft 365 was, what was the next big thing after E5? Guess what, we found it in Copilot. It's bigger than any suite. We talk about penetration and usage and the pace; it's bigger than anything we've done in information work, which we've been at for decades. And so we feel very, very good about the opportunity to create value for our shareholders, and at the same time we'll be fully transparent so that people can look through what the losses are. Who knows what the accounting rules are, but we will do whatever is needed, and people will then be able to see what's happening.

>> But a year ago, Satya, there were a bunch of headlines that Microsoft was pulling back on AI infrastructure, right? Fair or unfair, they were out there, and perhaps you guys were a little more conservative, a little more skeptical of what was going on. Amy said on the call last night, though, that you've been short power and infrastructure for many quarters, and she thought you would catch up, but you haven't caught up because demand keeps increasing. So I guess the question is: were you too conservative, knowing what you know now, and what's the road map from here?

>> Yeah, it's a great question, because the thing that we realized, and I'm glad we did, is the concept of building a fleet that is truly fungible: fungible across all the parts of the AI life cycle, fungible across geographies, and fungible across generations. Because one of the key things is, take even what Jensen and team are doing: they're at a pace, and one of the things I like is the speed of light. We now have GB300s that we're bringing up, and you don't want to have ordered a bunch of GB200s that are getting plugged in only to find that GB300s are in full production. So you have to make sure you're continuously modernizing, you're spreading the fleet all over, you're truly fungible by workload, and you're adding to that the software optimizations we talked about. So to me, that is the decision we made. And we said, look, sometimes you may have to say no to some of the demand, including some of the OpenAI demand, because sometimes Sam may say, hey, build me a dedicated multi-gigawatt data center in one location for training. That makes sense from an OpenAI perspective; it doesn't make sense for a long-term infrastructure buildout for Azure. And that's where I thought we did the right thing to give them the flexibility to go procure that from others, while maintaining a significant book of business from OpenAI, but more importantly giving ourselves flexibility with other customers and our own 1P. Remember, one of the things we don't want to be is short on our own first party. We talk about Azure, and in fact sometimes our investors are overly fixated on the Azure number, but remember, for me the high-margin business is Copilot: it's Security Copilot, it's GitHub Copilot, it's the healthcare Copilot. So we want to make sure we have a balanced way to approach the returns our investors get. And that's one of the other perhaps misunderstood things, in our investor base in particular, which I find pretty strange and funny, because I think they want to hold Microsoft because of the portfolio we have. But man, are they fixated on the growth number of one little thing called Azure.

>> On that point, Azure grew 39% in

the quarter, on a staggering $93 billion run rate. And I think that compares to GCP, which grew at 32%, and AWS, closer to 20%. But because you did give compute to 1P and to research, it sounds like Azure could have grown 41 or 42% had you had more compute to offer.

>> Absolutely. There's no question. So that's why I think the internal thing is to balance out what we think is in the long-term interest of our shareholders, to serve our customers well, and also, you know, one of the other things people talk about is concentration risk: we obviously want a lot of OpenAI, but we also want other customers. And so we're shaping the demand here. We're not demand-constrained; we're supply-constrained. So we are shaping the demand such that it matches the supply in the optimal way, with the long-term view.

>> To that point, Satya, you talked about $400 billion, an incredible number, of remaining performance obligations. Last night you said that's your booked business today; it'll surely go up tomorrow as sales continue to come in. And you said your need to build out capacity just to serve that backlog is very high. How diversified is that backlog, to your point? And how confident are you that that $400 billion does turn into revenue over the course of the next couple of years?

>> Yeah, that $400 billion has a very short duration, as Amy explained: it's a two-year duration on average. So that's definitely our intent. That's one of the reasons we're spending the capital outlay with high certainty: we just need to clear this backlog. And to your point, it's pretty diversified, both on the 1P and the 3P side. Our own first-party demand is quite frankly pretty high, and even amongst third parties, one of the things we're now seeing is the rise of all the other companies building real workloads that are scaling. So given that, I think we feel very good. That's one of the best things about RPO: you can be planful, quite frankly. So we feel very, very good about building. And this doesn't include the additional demand that we're already going to start seeing, including the $250 billion commitment, which will have a longer duration, and we'll build accordingly.

>> Right. So there are a lot of new entrants in this race to build out compute: Oracle, CoreWeave, Crusoe, etc. Normally we'd think that will compete away margins, but you've somehow managed to build all this out while maintaining healthy operating margins at Azure. So I guess the question is: how does Microsoft compete in a world where people are levering up and taking lower margins, while balancing profit and risk? And do you see any of those competitors doing deals that cause you to scratch your head and say, "Oh, we're just setting ourselves up for another boom-and-bust cycle"?

>> I mean, I'd say at some level the good news for us has been competing as a hyperscaler every day. There's a lot of competition between us and Amazon and Google on all of these. It's one of those interesting things: everything is supposedly a commodity, compute, storage; I remember everybody saying, wow, how can there be a margin? Except at scale, nothing is a commodity. So yes, our cost structure, our supply-chain efficiency, our software efficiencies all have to continue to compound in order to make sure there are margins. But scale matters, and to your point, one of the things I really love about the OpenAI partnership is that it's gotten us to scale. This is a scale game. When you have the biggest workload there is running on your cloud, that means not only are we going to learn faster what it means to operate at scale, it means your cost structure is going to come down faster than anyone else's. And guess what, that'll make us price competitive. So I feel pretty confident about our ability to have margins. And this is where the portfolio helps. I've always said,

you know, I've been forced into giving the Azure numbers, because at some level I never thought of allocating capital that way. My capital allocation is for the cloud, whether it's Xbox Cloud Gaming or Microsoft 365 or Azure; it's one capital outlay, and then everything is a meter as far as I'm concerned. From an MSFT perspective, it's a question of, hey, the blended average of that should match the operating margins we need as a company. After all, we're not a conglomerate; we're one company with one platform logic. It's not running five or six different businesses; we're in these five or six different businesses only to compound the returns on the cloud and AI investment.

>> Yeah, I love that line: at scale, nothing is a commodity. There's been a lot of ink and time spent, even on this podcast with my partner Bill Gurley, talking about circular revenues, including, Satya, Microsoft credits to OpenAI that were booked as revenue. Do you see anything going on like the AMD deal, where they traded 10% of their equity for a deal, or the Nvidia deal? Again, I don't want to be overly fixated on concern, but I do want to address head-on what is being talked about every day on CNBC and Bloomberg: there are a lot of these overlapping deals going on out there. When you think about that in the context of Microsoft, does any of it worry you as to the sustainability or durability of the AI revenues we see in the world?

>> Yeah. First of all, our investment of, let's say, that $13.5 billion, which was all the training investment, was not booked as revenue. That is the reason we have the equity percentage; that's the reason we have the 27%, or $135 billion. It was not something that somehow made it into Azure revenue. If anything, the Azure revenue was purely the consumption revenue of ChatGPT and anything else, and of the APIs they put out that they monetized and we monetized.

To your point about others: to some degree this has always been there, in the form of vendor financing. It's not a new concept that when someone's building something, and they have a customer who is also building something but needs financing, you finance them. It's taking some exotic forms, which obviously need to be scrutinized by the investment community, but that said, vendor financing is not a new concept. Interestingly enough, we have not had to do any of that. We either invested in OpenAI and essentially got an equity stake in return for compute, or essentially sold them great pricing on compute in order to bootstrap them. But others choose to do it differently, and I think circularity ultimately will be tested by demand, because all of this will work as long as there is demand for the final output. And up to now, that has been the case.

>> Certainly,

certainly. Well, I want to shift. As you said, over half your business is software applications, and I want to think about software and agents. Last year on this pod, you made a bit of a stir by saying that much of application software was a thin layer that sat on top of a CRUD database: "The notion that business applications exist, that's probably where they'll all collapse, right, in the agent era. Because if you think about it, they are essentially CRUD databases with a bunch of business logic, and the business logic is all going to these agents." Public software companies are

now trading at about 5.2 times forward revenue, below their 10-year average of seven times, despite the markets being at all-time highs. And there's lots of concern that SaaS subscriptions and margins may be put at risk by AI. So how is AI affecting the growth rates of your core software products today, specifically database, Fabric, security, Office 365? And then the second question, I guess, is: what are you doing to make sure that software is not disrupted but is instead superpowered by AI?

>> Yeah, that's right. So the last time we talked about this, my point really was that the architecture of SaaS applications is changing, because this agent tier is replacing the old business logic tier. If you think about it, the way we built SaaS applications in the past was that the data, the logic tier, and the UI were all tightly coupled. AI quite frankly doesn't respect that coupling, because it requires you to be able to decouple them. And yet the context

engineering is going to be very important. Take something like Office 365. One of the things I love about our Microsoft 365 offering is that it's low-ARPU, high-usage. Think about it: Outlook or Teams or SharePoint, Word or Excel, people are using them all the time, creating lots and lots of data that goes into the graph, and our ARPU is low. That's what gives me real confidence that I can meet this AI tier by exposing all my data. In fact, one of the fascinating things that's happened, Brad, with both GitHub and Microsoft 365 is that thanks to AI, we're seeing all-time highs in terms of data going into the graph or the repo.
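The grounding flow gestured at in this exchange (content accumulates in a graph or repo, gets turned into an embedding index, and the most semantically similar items are retrieved to ground an agent request) can be sketched with toy vectors. The character-frequency "embedding" below is purely illustrative, a stand-in for a real learned embedding model.

```python
import math

# Toy sketch of grounding: embed documents from a "graph", then ground
# a request by retrieving the nearest ones by cosine similarity.

def embed(text):
    # Crude 26-dim letter-frequency vector, normalized to unit length.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Both vectors are unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

graph = [
    "quarterly revenue spreadsheet",
    "team standup meeting notes",
    "security incident report",
]
index = [(doc, embed(doc)) for doc in graph]  # the "forward index"

def ground(request, k=1):
    # Retrieve the k most similar documents to ground the agent request.
    q = embed(request)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(ground("what was revenue last quarter?"))
# ['quarterly revenue spreadsheet']
```

A production system would use a learned embedding model and an approximate nearest-neighbor index, but the shape is the same: everything flowing into the graph becomes retrievable semantic context for agents.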

>> I mean, think about it. The more code that gets generated, whether by Codex or Claude or wherever, where is it going? GitHub. More PowerPoints get created, Excel models get created, all these artifacts and chat conversations; chat conversations are the new docs. They're all going into the graph, and all of that is needed again for grounding. You turn it into a forward index, into an embedding, and basically that semantics is what you use to ground any agent request. And so I think the

next generation of SaaS applications will have to reckon with this: if you are high-ARPU, low-usage, you have a bit of a problem. But we are the exact opposite: we are low-ARPU, high-usage. And I think anyone who can structure that can use this AI as an accelerant. If you look at the M365 Copilot price, it's higher than anything else we sell, and yet it's getting deployed faster and with more usage, so I feel very good. Or take coding: who would have thought? What GitHub did in the first 10 or 15 years of its existence was basically done again in the last year, just because coding is no longer a tool; it's more a substitute for wages, and so it's a very different type of business model.

>> Even kind of

gets distributed. So until very

recently, right, clouds largely ran

pre-ompiled software. You didn't need a

lot of GPUs and most of the value

acrewed to the software layer to the

database to the applications like CRM

and Excel. But it does seem in the

future that these interfaces will only

be valuable, right? If they're if

they're uh intelligent, right? If

they're pre-ompiled, they're kind of

dumb. The software's got to be able to

think and to act and to advise. And that

requires you know the production of

these tokens you know dealing with the

everchanging context. And so in that

world it does seem like much more of the

value will acrue to the AI factory if

you will to you know Jensen producing

you know uh helping to produce these

tokens at uh uh the lowest cost and to

the models and maybe that the agents or

the software will acrue a little bit

less of the value in the future than

they've accured in the in the past.

Well, steelman for me. Why that's wrong?

>> Yeah. So I think there are two things that are necessary to drive the value of AI. One is what you described first, which is the token factory. And even if you unpack the token factory, it's the hardware, the silicon, the system, but then it's about running it most efficiently, with the system software, with all the fungibility, at max utilization. That's where the hyperscaler's role is. What is a hyperscaler? If you said, hey, I want to run a hyperscaler, you could say, oh, it's simple, I'll buy a bunch of servers, wire them up, and run it. It's not that; if it were that simple, there would have been more than three hyperscalers by now. So the hyperscaler is the know-how of running the token factories at max utilization. And by the way, it's going to be heterogeneous: obviously Jensen's super competitive, Lisa is going to come, Hawk's going to produce things from Broadcom, and we will all do our own. So there's going to be a combination, and you ultimately want to run a heterogeneous fleet that is maximized for token throughput and efficiency and so on. So that's one job. The

factory. Remember that a SAS application

in the modern world is driving a

business outcome. it knows how to most

efficiently use the tokens to create

some business value. Uh in fact, GitHub

copilot is a great example of it, right?

Which is, you know, if you think about

it, it the auto mode of GitHub copilot

is the smartest thing we've done, right?

So, it chooses based on the prompt which

model to use for a code completion or a

task handoff, right? That's what you and

you do that not just by, you know,

choosing in some roundrobin fashion. You

do it because of the feedback cycle. You

have you have the eval, the data loops

and so on. So the new SAS applications

as you rightfully said are intelligent

applications that are optimized for a

set of evals and a set of outcomes that

then know how to use the token facto's

output most efficiently. Sometimes

latency matters, sometimes uh

performance matters and knowing how to

do that trade uh in a smart way is where

the SAS application value is. But

overall it is going to be true that

there is a real marginal cost to

software this time around. It was there

in the cloud era too when we were doing

you know CDROMs there wasn't much of a

marginal cost you know with the cloud

there was and this time around it's a

lot more and so therefore the business

models have to adjust and you have to do

these optimizations for the agent

factory and the token factory

separately. you have a big search

business that most people don't know

about, you know, but it turns out that

that's probably one of the most

profitable businesses in the history of

the world because people are running

lots of searches, billions of searches,

and the cost of completing a search if

you're Microsoft is many fractions of a

penny, right? Doesn't cost very much to

complete a search, but the comparable

query or prompt stack today when when

you use a chatbot looks different,

right? So I guess the question is

assume similar levels of revenue in the

future for those two businesses, right?

Do you ever get to a point where kind of

that chat interaction has unit economics

that are as profitable as search? I

think that's a great point because see

search was pretty magical uh in terms of

its ad unit uh and its cost economics

because there was the index which was a

fixed cost that you could then amortize

in a much more efficient way

>> uh whereas this one you know each chat

uh to your point you have to burn a lot

more GPU cycles uh both with the intent

and the retrieval so the economics are

different so I think you do that's why I

think a lot of the early sort of

economics of chat have been the premium

model and subscription on the even on

the consumer side. So we are yet to

discover whether it's agentic commerce

or whatever is the ad unit how it's

going to be litigated but at the same

time the fact that at this point you

know I kind of know in fact I use search

uh for very very specific navigational

queries I used to say I use it a lot for

commerce but that's also shifting to my

you know co-pilot like I look at the

co-pilot mode in edge and bing uh or

copilot now they're blend ending in. So

I think that yes, I think there is going

to be a relitigation just like that we

talked about the SAS disruption. We're

in the beginning of the cheese being a

little moved in consumer economics of

that category,

>> right? I I mean and given that it's the

multi-trillion dollar this this is the

thing that's driven all the economics of

the internet, right? when you move the

economics of search for both you and

Google and it converges on something

that looks more like a personal agent, a

personal assistant chat. Um, you know,

that could end up being much much bigger

in terms of the total value delivered to

humanity, but the unit economics, you're

not just advertising this one time fixed

index.

>> That's right.
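The contrast here, a search index whose fixed cost is amortized across a huge query volume versus a chat stack that pays in GPU time for every prompt, can be made concrete with a toy cost model. All figures below are hypothetical, chosen only to show the shape of the economics, not Microsoft's actual costs:

```python
def cost_per_query(fixed_cost: float, marginal_cost: float, queries: float) -> float:
    """Average cost per query: amortized fixed cost plus per-query marginal cost."""
    return fixed_cost / queries + marginal_cost

QUERIES = 1e12  # hypothetical annual query volume

# Search: a large fixed index cost, but a tiny marginal cost per query.
search = cost_per_query(fixed_cost=1e9, marginal_cost=0.0005, queries=QUERIES)

# Chat: a smaller fixed cost, but every prompt burns GPU cycles.
chat = cost_per_query(fixed_cost=1e8, marginal_cost=0.02, queries=QUERIES)

print(f"search: ${search:.4f}/query, chat: ${chat:.4f}/query")
```

At this scale the fixed cost washes out and the marginal cost dominates, which is why amortizing an index is so much friendlier than metering GPU cycles, and why early chat economics have leaned on subscriptions rather than a search-style ad unit.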

>> And so, that's right, I think the consumer category could be worse. You're pulling a thread on something that I think a lot about, which is that during these disruptions you have to have a real sense of what the category economics are and whether it's winner-take-all, and both matter. The problem in the consumer space is always that there's a finite amount of time: if I'm not doing one thing, I'm doing something else, and your monetization is predicated on some human interaction. If there were truly agentic stuff, even on the consumer side, that could be different. Whereas in the enterprise, one, it's not winner-take-all, and two, it is going to be a lot more friendly for agentic interaction. So it's not, for example, per seat versus consumption; the reality is agents are the new seats.
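One way to read "agents are the new seats" is that consumption pricing can extend, rather than replace, per-seat economics: you meter agent activity the way you used to count humans. A minimal sketch, with purely hypothetical prices and counts:

```python
def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Classic SaaS: revenue scales with human headcount."""
    return seats * price_per_seat

def agent_revenue(agents: int, tasks_per_agent: int, price_per_task: float) -> float:
    """Agentic SaaS: revenue scales with metered agent activity."""
    return agents * tasks_per_agent * price_per_task

# Hypothetical customer: headcount flattens from 1,000 to 800 seats,
# but 5,000 agents now run metered tasks alongside the humans.
seats_only = per_seat_revenue(seats=1000, price_per_seat=30.0)
seats_plus_agents = per_seat_revenue(seats=800, price_per_seat=30.0) + agent_revenue(
    agents=5000, tasks_per_agent=20, price_per_task=0.10
)
print(f"per-seat only: ${seats_only:,.0f}/mo, seats + agents: ${seats_plus_agents:,.0f}/mo")
```

Under these assumed numbers, the metered agents more than offset the flattening seat count, which is the sense in which enterprise monetization stays clear even as per-seat pricing erodes.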

>> And so you can think of it as: the enterprise monetization is much clearer, and the consumer monetization, I think, is a little more murky.

>> You know, we've seen a spate of layoffs recently, with Amazon announcing some big layoffs this week. The Mag 7 has had little job growth over the last three years despite really robust top lines. You didn't really grow your headcount from '24 to '25; it's around 225,000.

Many attribute this to normal getting fit, just getting more efficient coming out of COVID, and I think there's a lot of truth to that. But do you think part of this is due to AI? Do you think AI is going to be a net job creator? And do you see this being a long-term positive for Microsoft productivity? It feels to me like the pie grows, but you can do all these things much more efficiently, which either means your margins expand, or it means you reinvest those margin dollars and you grow faster for longer.

>> I call it the golden age of margin expansion. I'm a firm believer that the productivity curve does and will bend, in the sense that we will start seeing the work, and the workflow in particular, change. There's going to be more agency for you at the task level to get to job complete because of the power of these tools in your hand, and I think that is going to be the case. That's why, even internally, when you talked about our allocation of tokens, we want to make sure that everybody at Microsoft, as standard issue, has Microsoft 365 to the hilt, in the most unlimited way, and has GitHub Copilot, so that they can really be more productive. But here's the other interesting thing we're learning, Brad: there is a new way to even learn, which is how to work with agents. That's kind of like when Word, Excel, and PowerPoint all showed up in Office, and we learned how to rethink, say, how we did a forecast. Think about it: in the 80s, forecasts were interoffice memos and faxes and what have you. And then suddenly somebody said, "Oh, here's an Excel spreadsheet. Let's put it in an email, send it around, people enter numbers, and there's the forecast."

>> Similarly, right now, any planning, any execution starts with AI. You research with AI, you think with AI, you share with your colleagues, and what have you. So there's a new artifact being created and a new workflow being created. And it's when the pace of change of the business process matches the capability of AI that the productivity efficiencies come. So organizations that can master that are going to be the biggest beneficiaries, whether in our industry or, quite frankly, in the real world.

>> And so is Microsoft benefiting from that? Let's think about a couple of years from now. Five years from now, at the current growth rate it will be sooner, but let's call it five years from now, your top line is twice as big as it is today. Satya, how many more employees will you have if you grow revenue by...

>> One of the best things right now is the examples I'm hit with every day from the employees of Microsoft. There's a person who leads our network operations. Think about the amount of fiber we have had to put in for this 2-gigawatt data center we just built out, Fairwater, and the amount of fiber there for the AI; it's just crazy. And it turns out this is a real-world asset: there are, I think, 400 different fiber operators we're dealing with worldwide, and every time something happens we are literally going and dealing with all these DevOps pipelines. The person who leads it basically said to me: there's no way I'll ever get the headcount to go do all this; even if the budget were approved, I couldn't hire all these folks. So she did the next best thing: she built herself a whole bunch of agents to automate the DevOps pipeline for dealing with the maintenance. That is an example of, to your point, a team with AI tools being able to get more productivity. So to your question, I will say we will grow headcount, but the way I look at it is that the headcount we grow will come with a lot more leverage than the headcount we had pre-AI.

>> And that's the adjustment I think you're seeing first, structurally. You called it getting fit; I think of it as getting to a place where everybody is really learning how to rethink how they work. It's the how, not even the what: even if the what remains constant, how you go about it has to be relearned. And it's that unlearning and learning process that I think will take the next year or so; then the headcount growth will come with max leverage.
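The "headcount growth with max leverage" claim is ultimately arithmetic: if revenue roughly doubles while headcount grows far more slowly, revenue per employee climbs sharply. The figures below are illustrative only (the transcript's roughly 225,000 headcount, with an assumed revenue base and growth path):

```python
def revenue_per_employee(revenue: float, headcount: int) -> float:
    """Rough proxy for the per-employee leverage being described."""
    return revenue / headcount

# Illustrative: revenue doubles over five years while headcount grows ~20%.
today = revenue_per_employee(revenue=250e9, headcount=225_000)
future = revenue_per_employee(revenue=500e9, headcount=270_000)

print(f"today: ${today/1e6:.2f}M per employee, future: ${future/1e6:.2f}M per employee")
print(f"leverage gain: {future / today:.2f}x")
```

Under these assumptions, each future employee supports about two thirds more revenue than today, which is what "growing headcount with more leverage" cashes out to.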

>> Yeah. I think we're on the verge of incredible economic productivity growth. It does feel like, when I talk to you or Michael Dell, that most companies aren't even really in the first inning, maybe the first batter in the first inning, in reworking those workflows to get maximum leverage from these agents. But it sure feels like over the course of the next two to three years, that's where a lot of the gains are going to start coming from. And again, I certainly am an optimist; I think we're going to have net job gains from all of this. But those companies will be able to grow their number of employees slower than their top line. That is the productivity gain to the company. Aggregate all that up, and that's the productivity gain to the economy. And then we'll take that consumer surplus and invest it in creating a lot of things that didn't exist before.

>> 100%. 100%. Even in software development, right? No one would say we're going to have a challenge finding work for more software engineers to contribute to our society, because the reality is, just look at the IT backlog in any organization. The question is whether all these software agents are going to help us take a whack at all of the IT backlog we have; think of that dream of evergreen software. That's going to be true, and then think about the demand for software. So to your point, the levels of abstraction at which knowledge work happens will change. We will adjust the work and the workflow to that, and that will then adjust even the demand for the products of this industry.

>> I'm going to end on this, which is really around the reindustrialization of America. I've said that if you add up the $4 trillion of capex that you and so many of the big US tech companies are investing over the course of the next four or five years, it's about ten times the size of the Manhattan Project on an inflation-adjusted or GDP-adjusted basis. So it's a massive undertaking for America. The president has made it a real priority of his administration to recut the trade deals, and it looks like we now have trillions of dollars coming in; South Korea committed $350 billion of investments into the United States just today. When you think about what you see going on in power in the United States, both production and the grid, and what you see going on in terms of this reindustrialization, how do you think this is all going? And maybe just reflect, as we land the plane here, on your level of optimism for the few years ahead.

>> Yeah. I feel very, very optimistic, because in some sense, Brad Smith was telling me about the economy around our Wisconsin data center, and it's fascinating. Most people think, oh, a data center, that's going to be one big warehouse that's fully automated, and a lot of that is true. But first of all, what went into the construction of that data center, and the local supply chain of the data center, that is in some sense the reindustrialization of the United States as well, even before you get to what is happening in Arizona with the TSMC plants, or what is happening with Micron and their investments in memory, or Intel and their fabs, and what have you. There's a lot of stuff that we will want to start building, which doesn't mean we won't have trade deals that make sense for the United States with other countries. But to your point, the reindustrialization for the new economy, and making sure we have all the skills and all that capacity, from power on down, I think is very important for us. And the other thing I also say, Brad, and this is something I've had a chance to talk to President Trump as well as Secretary Lutnick and others about, is that it's important to recognize that we as the hyperscalers of the United States are also investing around the world. In other words, the United States is the biggest investor in compute factories, or token factories, around the world. So not only are we attracting foreign capital to invest in our country so that we can reindustrialize, we are also helping, whether in Europe or in Asia or elsewhere in Latin America and in Africa, with our capital investments, bringing the best American tech to the world that they can then innovate on and trust. And both of those, I think, bode really well for the United States long term.

>> I'm grateful for your leadership, and Sam is really helping lead the charge at OpenAI for America. I think this is a moment where, when I look ahead, you can see 4% GDP growth on the horizon. We'll have our challenges, we'll have our ups and downs; these tend to be stairs up rather than a line straight up and to the right. But I for one see a level of coordination going on between Washington and Silicon Valley, between big tech and the reindustrialization of America, that gives me cause for incredible hope. Watching what happened this week in Asia, led by the president and his team, and then watching what's happening here, is super exciting. So thanks for making the time. We're big fans. Thanks, Satya.

>> Thanks so much, Brad. Thank you.

As a reminder to everybody: just our opinions, not investment advice.
