
Class #3 | MS&E 435: Economics of the AI Supercycle, Stanford University, Spring '26, Apoorv Agrawal

By MS&E 435: Economics of AI

Summary

Topics Covered

  • AI Is Digital Labor That Accelerates GDP Growth
  • Energy-Abundant Markets Enable AI Infrastructure Buildouts
  • AI Infrastructure Costs $60M Per Megawatt—And It's Rising
  • AI Compute Defies Commoditization—H100 Prices Keep Rising
  • Space Data Centers: Not in 5-10 Years, But Inevitable Long-Term

Full Transcript

Today we're going to talk about the data centers that you guys are melting. The big theme of all of this is really this chart. This is the capex spend by the five hyperscalers on AI.

And as you can tell, it's going up and to the right, and it's going up and to the right fast. To put this in context, this is one of the biggest investments that we are making: bigger than the space program, bigger than our highway system or the Manhattan Project, second only to the US defense budget.

And I'm so excited that we're going to break it down today with Chase Lochmiller, founder and CEO of Crusoe, who is arguably building it the best that we know. A quick introduction for Chase before we bring him on stage. Chase is obviously the founder and CEO of Crusoe, as you guys know. A less known and fun fact about Chase that I learned very recently is that he is a very avid mountaineer: he has climbed five of the seven summits across the world, including Everest. And Crusoe is designed with mountaineering in mind: there's a plan A, there's a plan B, and then there's a plan C to the plan B. Chase, thank you so much for doing it with us. Please join us.

Thank you.

Thank you. Thank you. Thank you.

Thanks for having me.

Thanks for joining us.

Yeah. So Chase, tell us a little bit about this data center economy we're in the middle of.

This is extra special for me. You know, I went to grad school here at Stanford, so it's fun being on the other side of the table, being back on campus. So I appreciate you guys taking an interest in what I'm working on now here at Crusoe.

But I think, in certain ways, the data center itself is the physical manifestation of this boom that we're seeing in AI adoption. It's the physical infrastructure that's required to power the GPUs, to operate the GPUs, to run these big compute workloads that are training new models, fine-tuning models, and operating large-scale inference workloads to serve tokens to consumers and to everybody who raised their hand today saying that they used Gemini or ChatGPT or Claude today.

So the data center is the physical infrastructure component that really enables all of this technology to proliferate and change people's lives.

Amazing. Now, there are a lot of different components that go into the data center, Chase. We've heard a thing or two about compute. We've heard a thing or two about memory and power and interconnects and labor. Help us put into context all the different things that go into it. How should we contextualize it when our hyperscalers spend $650 billion building these data centers? Where does that go, and how much of that is going to you?

It's a good question. When I think about this infrastructure of intelligence, the starting point I start off with is: what does it take to produce AI? Everybody's obsessed with AI, but what's actually required to make it? I have this basic equation, which is that AI is the combination of: data; algorithms (backpropagation, neural networks, transformer architectures, all these different algorithms people have come up with to essentially statistically model the data sets); compute (large amounts of compute, and particularly high-performance computing infrastructure, because through GPUs we're able to parallelize a lot of the workloads and do a lot of this tensor math in a high-performance architecture); energy, which is required to run those GPUs; and then data centers, the physical buildings that house and operate all of this computing infrastructure.

So what actually costs money? Well, data, sure, it costs money; you have to buy data. It's opened up a new opportunity for a lot of data-labeling companies: folks like Scale AI, folks like Mercor; even Handshake has bridged into this. It's created a big opportunity for people to make money by producing data that's useful for AI. The algorithms sit in a lot of the labs that are inventing new mechanisms (recursive learning techniques are a new trend) and a lot of the different architectures people are using to make better use of the data. But the compute, the energy, and the data centers: that's really what Crusoe focuses on. How do we actually build, operate, scale, and make the best use of all of this infrastructure? And it tends to be the area where a lot of the money is being made. Or where a lot of the money is being spent, sorry: that capex chart that you showed, right?

So what I would say is, the name of the presentation I came up with was "From Electrons to Tokens." So why are tokens actually valuable? And given this is in the econ department...

It's in the engineering department.

It's in engineering. Okay.

Well, okay, some of you have taken economics. There's this economics model for production called the Cobb-Douglas model. If you look at the growth in GDP, it is fundamentally the sum of three key components: the change in labor; the change in capital, where capital includes both physical capital, like buildings, plants, and equipment, as well as capital that's invested into an economy; and then the change in technology. Technology basically makes labor more productive, which I think is a good way of framing it. So why are tokens so valuable, and why is this a step change? Why is everybody making these huge investments?

This capex chart that Apoorv showed: the reason for that is that for the first time in history, we're able to actually create this sense of digital labor. When you give an agent a task, when you give your Claude bot a task of going and doing something ("go create a CRM for my new product that I just launched"), that is literally digital labor being brought into the world. Historically, labor is this thing that you could really only change via the birth rate, right? And it's got a 20-year lead time, a massive incubation period. You've got to send them to school and feed them and house them and all this stuff. I have three kids, you know? It takes a long time to change delta L naturally. But for the first time in history, we're able to change this delta L digitally, through investment in building data centers and buying GPUs, and really accelerate the growth of the economy by accelerating the growth of the digital labor force.

So that's the premise I want to get across initially: the reason these investments are taking place, and the reason it's so broad-based, is that there's an opportunity to completely transform the economy by accelerating the growth in GDP, fundamentally upleveling and improving people's quality of life through an unprecedented level of growth.
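The Cobb-Douglas growth-accounting story above can be sketched numerically. In growth-rate form, gY is roughly gA + alpha * gK + (1 - alpha) * gL, so adding a "digital labor" term to gL raises GDP growth directly. Every parameter value below is an illustrative assumption, not a figure from the talk:

```python
# Growth-rate form of Cobb-Douglas production, Y = A * K^alpha * L^(1 - alpha):
#   gY ~= gA + alpha * gK + (1 - alpha) * gL
# All parameter values are illustrative assumptions, not figures from the talk.

alpha = 0.35          # capital share of output (a common textbook value)
g_A = 0.01            # 1% annual total-factor-productivity growth
g_K = 0.03            # 3% annual capital growth
g_L_births = 0.005    # 0.5% labor growth via demographics (the "20-year lead time" channel)
g_L_digital = 0.03    # hypothetical extra "digital labor" growth from AI agents

def gdp_growth(g_A, g_K, g_L, alpha):
    """GDP growth implied by the growth-rate form of Cobb-Douglas."""
    return g_A + alpha * g_K + (1 - alpha) * g_L

baseline = gdp_growth(g_A, g_K, g_L_births, alpha)
with_ai = gdp_growth(g_A, g_K, g_L_births + g_L_digital, alpha)

print(f"baseline GDP growth: {baseline:.3%}")  # 2.375%
print(f"with digital labor:  {with_ai:.3%}")   # 4.325%
```

The point of the sketch is only the mechanism: because labor enters with weight (1 - alpha), any channel that grows effective labor faster than the birth rate shows up one-for-one scaled by that weight in GDP growth.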

Now, Crusoe sits at a couple of different layers of the stack. Crusoe as a business is a vertically integrated AI infrastructure business. Because this is a new category of infrastructure, we've taken the approach that we want to be able to unblock anything that gets in our way of standing up this infrastructure of intelligence that's going to power the accelerating growth in GDP for the economy. We think about it in two phases. One is the bottom layers of the stack, which are basically energy development. These data centers require lots and lots of energy, and we're very thoughtful about taking an energy-first approach, going to areas that have access to abundant, low-cost energy resources.

The next piece is the data center piece: the actual physical buildings, which includes the building, the plant around it, all of the chillers. Because when you think about it, at its most basic, a data center is a building that has power and cooling, and you can plug in computers. That's pretty much what it is. But when you're doing it at very, very large scale, it becomes very, very complex, and it draws on the pinnacle of engineering from across every single discipline: chemical engineering and cooling architectures, mechanical engineering, electrical engineering dealing with these high-voltage power sources and very high density capacities, and then also the computer science and electrical engineering involved in the chip architectures, how compute gets run, how data gets transferred. It really is the amalgamation and consolidation of every form of engineering in one giant building that operates intelligence for the world. So it's a cool engineering problem.

Yeah. On this, Chase, tell us a little about the bottleneck in AI. Four years ago we heard it was compute; then power; memory stocks are ripping right now; then labor; and then there are all these other components: land, powered shell, et cetera, maybe even regulation. Give us an overview. For the time you've been doing this, which is just under a decade, the bottleneck seems to shift. How has it traversed over time, where is it today, and where do you see it going? What is the core bottleneck gating the growth?

Today the core bottleneck is powered, energized data centers: powered shells where you can plug in chips and start operating a big GPU compute cluster. That, today, is the bottleneck. But the bottleneck moves around a lot. Sometimes it's getting power to those data centers. Sometimes it's individual components that go into building the data centers: electrical equipment, switchgear, chillers, power gen, chips. Chips have kind of softened as the bottleneck; access to chips has become more available. It's really finding places where you can put those chips and turn them on.

So that's part of the reason Crusoe has taken this vertically integrated approach: bottlenecks move around, and being vertically integrated means you can do almost anything across the stack. We're not in the chip business. We're not in the model business. But apart from that, we tackle most challenges throughout the entire ecosystem.

Right. Now, maybe just to follow up on that, Chase: you started Crusoe, as in Robinson Crusoe, with the core insight that energy was one of the most scarce resources, at least in the Western world. Tell us a little about that. Maybe pick a site, maybe pick Abilene or one of the others you can talk about. Why did you start there? What was the scarce resource you were solving for, and why work backwards from energy? Why not from compute, or memory, or, as you said, powered shell?

Well, I guess our insight was that markets are reasonably efficient when you look at things. There was this steady-state growth in data center capacity as the web 2.0 bubble, or not bubble, but the web 2.0 trend, unfolded: web applications were increasingly growing and people were more online. It became this machine that was standing up new data centers, and they were happening in these big hubs, markets like Northern Virginia, which comes to mind as an area that runs a large portion of the internet.

I never wanted to be a me-too: "I'm the next data center developer in Northern Virginia; I'm going to build the next building there." That didn't seem like an appealing way to enter a new market and really make a splash. So we said, look, there are going to be these new types of computing applications that are far different from serving web applications: things like artificial intelligence, training large workloads with backpropagation, and things like digital currencies, which require a tremendous amount of computing power for proof-of-work consensus mechanisms. Those require tons of energy, and at scale, energy becomes the bottleneck. So we said: can we find areas that aren't historical data center markets, go there, and build data centers where, instead of having to move the energy, we're actually moving data? We could co-locate with these energy resources. So, giving an example here: we'll talk about the top two layers of the stack, both the deployment of GPU clusters as well as how to actually monetize that and serve intelligence with managed services. Okay, but first we're going to talk about the bottom two layers of the stack.

One first manifestation of that is what you see in this photo. This is a site that we've been working on since June 2024, when we signed the first two buildings on the right-hand side of the screen. At this point it's one of the largest AI computing campuses in the world; I think it might be the largest. Our insight here: Abilene, Texas is a place many folks had never heard of until we put a shovel in the ground there. So why did we go to Abilene? Abilene is an area of West Texas that is consistently very windy and very sunny, so a lot of renewable energy developers had gone there to build out large-scale renewable energy production, because they were incentivized by something called production tax credits, where they basically get paid by the government to produce clean electrons, and they have to sell them to someone, independent of the price. What that resulted in was an overinvestment in renewable generation infrastructure in this West Texas market. Power prices were actually negative, because there was no marginal buyer for this power and there wasn't enough transmission to get it somewhere it was actually useful. So we said: great, have we got a power-hungry application for you!

So we ended up working with the city of Abilene. If you look at the top of the photo, there's this gray square, and below that a bigger gray square. Those are both substations. The top one is a 200-megawatt substation. The second, larger one is a 1-gigawatt substation. To put those numbers into context: that gigawatt substation is, first of all, the largest privately owned substation in the United States. And a gigawatt, well, I grew up in Denver, and a gigawatt is basically what powers the whole city of Denver. So it's basically a city of Denver's worth of power, to power computers. It is a very large amount of power in one single location, and we were able to access it fundamentally because there was this abundant, low-cost energy in this market that was having issues getting out; there wasn't transmission to get it out of that market. That created a massive opportunity for us to go in there and build large-scale, cutting-edge AI infrastructure.

A couple of follow-ups here, Chase. Who is this tenant? Could you tell us: is this ChatGPT? Is this Claude? Is this Gemini?

So, if you look at this campus, there are eight buildings. This is building one, building two, and that is the substation I was just referring to, and then you have buildings three, four, five, six, seven, and eight up there. Those first eight buildings are all for Oracle and OpenAI. It's basically what was known as Project Stargate, this first big project. In order to help support this, we also built, down here, a natural gas power plant. This is roughly a 350-megawatt natural gas power plant to support the development and energize this giant computing cluster. It was also designed to operate as one coherent cluster, which means that all of the chips across all of the data centers are interconnected on the same high-performance back-end network, so they can operate as one coherent workload. You could run one training job on all of the chips across all of the data centers together, which is a really, really unique architecture. And to give you a sense of scale, because it's hard to see from a photo...
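The "one coherent cluster" idea, every chip on one back-end network running a single job, can be illustrated with a toy data-parallel sketch: each simulated worker stands in for a GPU, computes a gradient on its own data shard, and an all-reduce averages the gradients so every worker applies the identical update. The worker count and the one-parameter model are illustrative assumptions; real clusters do this step over the high-bandwidth fabric described above, which is why the buildings must share one network.

```python
# Toy sketch of data-parallel training across one "coherent cluster".
# Each simulated worker (standing in for a GPU) computes a gradient on its
# own data shard; an all-reduce averages the gradients so every worker
# applies the identical weight update. Illustrative only.

def local_gradient(w, shard):
    # Gradient of mean squared error for the one-parameter model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    # Stand-in for the network all-reduce: average one value per worker.
    return sum(values) / len(values)

# Data with true relationship y = 3x, split across 4 simulated "GPUs".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    grads = [local_gradient(w, s) for s in shards]  # parallel compute phase
    g = all_reduce_mean(grads)                      # network-bound phase
    w -= 0.01 * g                                   # identical update everywhere

print(f"learned w ~ {w:.3f}")  # converges toward 3.0
```

The compute phase parallelizes perfectly; the all-reduce is the step that is bounded by the interconnect, which is why a single back-end network spanning every building matters.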

Mhm.

We had to build this parking lot over here. That parking lot is a 5,000-car parking lot, and you can see it's totally full, because we have roughly 9,000 people on site every single day working to bring this campus to life. There is an expansion planned to the south of the campus; you can kind of see some of the dirt there. That is for Microsoft. So the campus is 2.1 gigawatts in aggregate. Again, that's two Denvers' worth of power to power all of the AI compute infrastructure that's going in here.

9,000 people. Chase, what is the population of Abilene?

Uh 9,000.

No. Although there's another campus I'm going to show you where our on-site staff is larger than the population of the town.

Fascinating.

This is the Manhattan Project.

The population of Abilene is 120,000. So we were able to source a lot of people initially from Abilene, but over time we had to create labor and retention incentives to get people to move there to work as short-term construction workers. Over the long term, there's a steady job population operating these large computing clusters and power plants; that staff is somewhere in the neighborhood of 2,000 people, just to put it in context. But it's still a very, very large job creator in this local economy, where the whole population is roughly 120,000 people.

Go ahead.

No, please.

While the audience is primarily engineering, the class is the Economics of AI, right? So walk us through the metaphorical spend of $100. If you had a hundred bucks, or X dollars of spend, what is the distribution across the different layers?

Yeah.

Or maybe you were coming to it.

I'm going to come to it. We're going to start with just the power plant and the data center side; then we'll go to the compute clusters; and then we'll talk about how, with these tens of billions of dollars being invested, anybody is making any money, where the return happens. So we'll get to that as I progress through the slides. But the initial slide that Apoorv showed was this huge capex spend, and a lot of those companies are Crusoe customers. We help serve all of those customers when they're building out these big capex investments, and we help them build out this infrastructure-of-intelligence layer. With that, there are a bunch of different components that go into this, but electrical equipment is a huge one. If you look over here, there are these small buildings with white roofs on them. Those are called power distribution centers. They take power from the substation, which comes in at a medium voltage, 34.5 kV, so 34,500 volts, and distribute it to the lineup you see where it says "transformers."
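A quick sketch of why power is distributed at medium voltage and stepped down only near the racks: for a fixed power, line current scales as 1/V, and resistive loss in the conductors scales with the square of the current. The 34.5 kV figure is from the talk; the 480 V low-voltage side, the three-phase formula, and the 10 MW load are typical values assumed for illustration:

```python
import math

# For a fixed power P, three-phase line current is I = P / (sqrt(3) * V),
# and resistive conductor loss scales with I^2. 34.5 kV is from the talk;
# 480 V, three-phase, and the 10 MW load are illustrative assumptions.

def line_current_amps(p_watts, v_volts):
    return p_watts / (math.sqrt(3) * v_volts)

P = 10e6  # a hypothetical 10 MW data-hall lineup

i_mv = line_current_amps(P, 34_500)  # medium-voltage feed
i_lv = line_current_amps(P, 480)     # after the step-down transformer

print(f"current at 34.5 kV: {i_mv:,.0f} A")                  # ~167 A
print(f"current at 480 V:   {i_lv:,.0f} A")                  # ~12,028 A
print(f"I^2 loss ratio (LV/MV): {(i_lv / i_mv) ** 2:,.0f}x")  # ~5,166x
```

This is the whole design logic of the power distribution centers: carry the power across the site at 34.5 kV, where currents (and copper) are small, and only step down at the last possible point.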

There are transformers lining the entire left side of the building and the entire right side of the building, and the distribution system is delivering power to those transformers so they can step it down from 34.5 kV to 480 or 415 volts. So that's a big piece of capex: equipment we're investing in that goes into building out the building.

You also have all sorts of cooling equipment. All of the mechanical equipment: think of this lineup of chillers. They kind of look like RAM, but they're not. Those are air-cooled chillers, basically wound pieces of copper pipe. You have this giant chilled-water loop in the data center that's recirculating water: cool water on the inlet side goes into the rack of GPUs; there's a thermal transfer event from the chip, which is being energized and producing a lot of heat, to the water; and then the water goes out through these chillers, where you're blowing a bunch of air over those wound copper coils to exhaust the heat out of the system, so the water temperature steps back down and you have cold water to cool the GPUs again. So again, that's another big investment. All of the plumbing that goes into this is really substantial; we have a ton of plumbers and pipe fitters on site welding these big plumbing systems together. Each building has about 1 million gallons of water in it to cool these chips. But again, it's recirculating. There's this ongoing narrative that AI is taking all the water; we use almost zero water. We fill the system one time, and then on an annual basis we use about the same amount of water as a single-family home. So it's very limited water consumption, which is important in a market like Abilene and West Texas, where water is actually quite scarce.
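The chilled-water loop described above can be sized with a rough heat balance: essentially all electrical power delivered to the chips leaves as heat that the water must carry away, so Q = mdot * c * dT. The 10 degree Celsius supply/return temperature difference is an illustrative assumption, not a figure from the talk:

```python
# Back-of-envelope sizing for the chilled-water loop: nearly all electrical
# power delivered to the GPUs leaves as heat the loop must carry away.
#   Q = mdot * c * dT  =>  mdot = Q / (c * dT)
# The 10 C supply/return delta-T is an illustrative assumption.

C_WATER = 4186.0   # J/(kg*K), specific heat of water

def water_flow_kg_per_s(heat_watts, delta_t_c):
    return heat_watts / (C_WATER * delta_t_c)

Q = 1e6    # 1 MW of IT load
dT = 10.0  # assumed temperature rise from rack inlet to outlet, in C

mdot = water_flow_kg_per_s(Q, dT)   # ~24 kg/s per MW of load
liters_per_min = mdot * 60          # 1 kg of water is about 1 liter

print(f"flow per MW: {mdot:.1f} kg/s (~{liters_per_min:,.0f} L/min)")
```

Note this is recirculating flow, not consumption, which is consistent with the point above: the loop is filled once and the chillers reject the heat to air.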

So you can see the costs here; I've normalized them on a per-megawatt basis. Things like the power distribution centers; the UPS system, a battery system (uninterruptible power supply) that you need to smooth out the power as it's distributed from the substation into the actual chip; other alternative battery systems that we're experimenting with; and cooling distribution units, which go in the data center itself, take water in from the chilled-water pipe, and distribute it to the individual racks of GPUs. And then of course there are the core components that go into this. There was nothing here before. There's a ton of steel. There's a ton of concrete. We have our own batch plant on site; we're actually making concrete on site, with people pouring concrete 24/7. All of the site work, all of the labor that goes into making this happen: it's tons of people, tons of man-hours.

And then on the power infrastructure side, we highlighted two things here. One is the gas power plant, which I showed you in the previous slide; you see a little snapshot of it down here on the bottom. And then, for some of the infrastructure, we have diesel generators. You can see these three gray-roofed buildings in the middle there; those are backing up power for the core network. The way we've thought about this problem is that not everything needs 100%, five-nines reliability with full backup, but the core storage and networking systems do, so that in the event of a full grid outage, in a disaster scenario, we'll still be able to access the storage systems. If there's a checkpoint we need to reference to move a workload to a new location, we can do that.

So anyway, it's a lot of money that goes into this. I wanted to give you a breakout of the full power plant plus the building costs. And I want to highlight something for you guys, because I showed you that really big, 5,000-car parking lot. Well, the bottom piece here is labor: $4.7 million per megawatt. So for a gigawatt, that becomes $4.7 billion. This is literally money being invested in people to do jobs; it's a blue-collar labor force that we're investing in to bring this infrastructure to life. It's people working construction, people on site building these things. So it's a very, very substantial number when you look at it in the scheme of things, and that's an annual number.

So when you ask me about bottlenecks: this is a bottleneck. We don't have enough of these tradespeople. We don't have enough electricians. We don't have enough welders. We don't have enough plumbers. We don't have enough construction workers. Because this is one project, and there are many of these now cropping up, there's actually huge competition for labor. So at Crusoe, we're trying to reinvent how we bring a lot of this infrastructure to life, to be able to navigate some of these really critical labor challenges.

If you look at soft costs, what does that include? Things like insurance. Things like financing costs: we borrow money with a construction loan, and then we have to service the debt for that construction loan. Things like siting and all of the work we're doing with commissioning. The next piece is the gas plant, and again, this is probably $2 to $3 million per megawatt.

I think what's important to realize about the gas plant is that those costs have gone up a lot.

Mhm.

There's a small set of gas turbine manufacturers: you basically have GE Vernova, Siemens, Mitsubishi Heavy Industries, Pratt & Whitney, and Caterpillar has a company called Solar Turbines. But a lot of these companies have been limited in how much they've actually expanded production capacity. So what's happened in a moment when everybody's trying to bring on new gas generation infrastructure to power their AI compute clusters? Prices have gone up a lot. A gas turbine that used to cost a million dollars a megawatt now costs $3 million a megawatt. Those prices have grown, and that's why, if you follow the stock market and look at the stock price of GE Vernova, it's been good to be a shareholder of GE Vernova.

So anyway, that's another piece of the stack. The tenant fit-out: this is all the stuff in the actual data hall. These are things like remote power panels, the hot-aisle containment systems, the fan walls, the cooling distribution units, all of the stuff you need in the actual data hall where the GPUs go, to actually energize and power the GPUs. The electrical equipment, again, touches all the different pieces from high voltage to low voltage: power transformers, power distribution centers, medium-voltage switchgear, low-voltage switchgear. Think of the electrical panel in your home, the one where, if the lights go out, you go down and flip a few breakers. It's like that, but at the scale of a city, all in a giant electrical room.

Mhm.

The mechanical equipment: all of the equipment from the chillers to all of the plumbing, the air handling units, and the fan walls that cool this stuff and have mechanical systems involved. And then of course the materials, the steel, the cement, all of the different components that go into making one of these large buildings happen. So anyway, that's what the full stack looks like.

Couple of quick questions here, Chase.

Where are the GPUs?

Oh, that's on the next phase. Okay.

Yeah. Yeah. So we'll get to that. We'll

get to that.

So this is roughly $20 billion per gigawatt. And assuming a gigawatt took a year to come online, you would be paying $4.5 to $5 billion in salaries for labor per year for the construction period.

This is capitalized labor. So this is not opex; it's capex.

Yeah, yeah. This is not opex, right?
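Scaling the per-megawatt figures from the talk: the roughly $20 million per megawatt all-in build cost, the $4.7 million per megawatt construction labor line, and the 2.1 GW Abilene campus size are all figures stated above; everything else here is just unit arithmetic:

```python
# Scaling the per-megawatt figures from the talk to a full campus.
# $20M/MW all-in build cost and $4.7M/MW construction labor are the
# figures stated above; the rest is unit arithmetic.

COST_PER_MW = 20e6    # ~$20M per megawatt, all-in build cost
LABOR_PER_MW = 4.7e6  # ~$4.7M per megawatt of construction labor

def campus_cost(megawatts, per_mw=COST_PER_MW):
    return megawatts * per_mw

one_gw = campus_cost(1_000)               # a 1 GW build
abilene = campus_cost(2_100)              # the 2.1 GW campus
labor_share = LABOR_PER_MW / COST_PER_MW  # labor's slice of the build cost

print(f"1 GW build:    ${one_gw / 1e9:.0f}B")      # $20B
print(f"2.1 GW campus: ${abilene / 1e9:.0f}B")     # $42B
print(f"labor share:   {labor_share:.1%}")         # 23.5%
```

Note that these are construction-period capex figures only; GPUs and operating costs sit on top, which is the part the speaker says comes in later slides.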

I'll show opex in another slide.

Great. And one final question: is this number, $20 million per megawatt or $20 billion per gigawatt, going down over time or going up over time?

Because obviously some components see so much demand. Things like gas generation infrastructure, guess what? Prices have gone up. Things like labor: if you're an electrician, your price has gone up because there's so much demand for your time.

Great.

So all these things are seeing price inflation to varying degrees in each category of this infrastructure. So I wanted to show another cool project that we're doing, just to show how we're taking this energy-first approach, and, to your comment around how many people: today I think we have 3,500 people at this site.

It's in a town called Claude, Texas, which is hilariously poetic. We didn't name the town, but Claude is a town of 1,500 people, right? We have 3,500 people working on this project. It happens to be close enough to Amarillo that we're able to tap into a lot of the working population of Amarillo. You can see it's an area that's very rich in renewables; it's one of the best places in the US to build wind because it's so consistently windy. So you can see the on-site wind farm we have there at the top of the screen, a very large wind farm producing power that directly feeds into the data centers. We're able to firm up the power with something we call "across the meter." Basically, the meter is the interconnection point into the grid. You have "behind the meter," which is power that's on site, but what we're doing is what I call "across the meter": we have on-site generation through wind.

There's a plan to build solar, batteries, and gas. So, all-of-the-above energy solutions to basically energize this campus. What we don't need, guess what, we can sell into the grid, creating an energy abundance that drops the cost for all local ratepayers. And when we actually need power, because we're doing maintenance on some of the generators, or the wind's not blowing, or the sun's not shining, we need to firm up the power, and we can draw it from the grid. So it becomes this very mutually beneficial relationship of us investing in the power infrastructure and then leveraging the large distribution, transmission, and other generators across the grid. So it's a very cool project. I can't speak about who the customer is, but it is a very big customer for this location. You asked about the GPUs. Great segue. All right,

who's making money in this? This guy,

you know, if you didn't know already, he's got a building right over here.

When we think about the IT capex per megawatt, remember, I just showed you that the whole data center and the power plant was roughly, call it, $20 million a megawatt, rounding up. When you look at the IT capex, this is basically the compute infrastructure that's going into the building, and it's roughly $40 million per megawatt, right? And this is kind of forward-looking; this is next-gen stuff. $30 million of that is going to the GPUs, right? That's why he always looks so happy, you know. He's always smiling, Jensen.

And then where does the rest go? Roughly $4 million to the networking, right? These are very complex networking systems, especially when you're thinking about the investment of interconnecting these GPUs together as one giant coherent cluster. When you look at the latest generations of GPUs, right here is the GB300 or GB200, I'm not sure which, but what Nvidia has come out with there is actually a full rack design, right? Which means that all of the GPUs, there are 72 GPUs in that rack, are all interconnected on the same NVLink domain. You can kind of see the back copper plane there on the right side. They're all interconnected on this high-performance back-end networking domain, which enables AI researchers to do incredibly high-performance tasks and enables a lot of incredible use cases to share data across the NVLink domain. But then you have to interconnect those racks together through another high-performance back-end network, typically InfiniBand or sometimes RoCE, which is RDMA over Converged Ethernet. So that's where that $4 million per megawatt of spend is, that green bar of networking.

The next thing is CPUs and storage, right? What's amazing, what we've been seeing recently, is actually a massive shortage of CPUs. Why is that? With the boom in all of these agentic workflows, with the boom in cloud, guess what? You need a lot of CPUs to actually orchestrate those compute workloads. So you're seeing a lot of demand from everybody in the ecosystem to bring online a lot more CPUs. So, about $3 million a megawatt for CPUs and storage. And then there's a bunch of in-the-room capex, which I think I might be double counting here, but a lot of that tenant fit-out that I referred to earlier, that's roughly $3 million a megawatt. And then you have about $1 million in labor, deployment, shipping, etc. But you get to this rough number of about $40 million per megawatt. Yeah. Go ahead.

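The IT-capex categories Chase lists can be tallied to confirm the rough $40 million-per-megawatt figure. The splits are the approximate numbers quoted in the talk:

```python
# Approximate IT capex per megawatt, in millions of USD.
it_capex = {
    "GPUs": 30,
    "networking (NVLink, InfiniBand/RoCE)": 4,
    "CPUs and storage": 3,
    "in-room / tenant fit-out": 3,
    "labor, deployment, shipping": 1,
}

total = sum(it_capex.values())
print(f"IT capex: ~${total}M per megawatt")  # ~$41M, i.e. roughly $40M/MW
print(f"GPU share of IT spend: {it_capex['GPUs'] / total:.0%}")  # 73%
```

The GPUs alone are roughly three-quarters of the IT spend, which is why the chip vendor captures so much of the value here.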

One question here. You know, Jensen did a podcast yesterday with Dores where he looked not so happy. Lots of debate around compute being a commodity. Yes or no? Is this number going down over time? Is compute a commodity? What are you seeing in prices?

It's tough to say. I mean, it's hard to say how things look over the near term, medium term, and long term.

And you know, it's possible that both are right, right?

Where it's like, you know, I think it depends on the use case too, right? If you look at older compute, it's going to commoditize as you go further back, right?

Yeah, you know, I think one of the things that's absolutely not a commodity is scale, right? So if you do anything at really, really big scale, it's super hard to replicate and super hard to repeat. There's always going to be a cutting edge. So any of the newest stuff is always going to command a premium, and that's been the history of the IT industry.

So we'll see how it kind of plays out. But look, capitalism is a powerful force. The invisible hand of capitalism is a very powerful force. So I do think that over time, margins probably come down to more standard, stabilized silicon margins. Call it, I don't know, 60% gross margin, where today Nvidia is commanding something like 80% gross margin. So, you know, I don't know. Competition is powerful.

Yeah. Yeah. Yeah. Perfect.
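Margin compression of the size Chase describes implies a large price move for the same silicon cost. A minimal sketch, assuming the chip's cost of goods stays fixed:

```python
def price_at_margin(cogs: float, gross_margin: float) -> float:
    # gross margin = (price - cogs) / price  =>  price = cogs / (1 - margin)
    return cogs / (1 - gross_margin)

cogs = 1.0  # normalized chip cost of goods
p_today = price_at_margin(cogs, 0.80)   # ~80% gross margin today
p_future = price_at_margin(cogs, 0.60)  # "stabilized silicon" ~60% margin

print(f"Implied price drop: {1 - p_future / p_today:.0%}")  # 50%
```

In other words, a drift from 80% to 60% gross margin on fixed cost of goods would cut the sale price of the same chip in half.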

So I think it's important to understand: you make this huge investment, right? We're talking, call it, $20 million a megawatt for the data center and the power plant, and another $40 million a megawatt to stand up this compute cluster. You just spent $60 million per megawatt, right? So you build a gigawatt cluster, you just spent $60 billion, right? How are you going to make money? What's the pot of gold at the end of the rainbow? People are buying this infrastructure to support all of these AI applications, to serve tokens to customers. And I think the reason I wanted to show this chart, which is a Bloomberg chart of H100 spot pricing, is that there's this question of how valuable all this equipment is and what timeline you depreciate it over, right? What's the useful life of all this equipment? And I think people said, you know, the next generation's going to come out and then this stuff's going to be completely useless. Well, this chart tells the exact opposite story, which is that for H100s, which debuted about three years ago, the pricing had come down, but with this boom we're seeing in demand coming from agents, the price of H100s has actually come up and exceeded the price folks were paying when these chips first came out.

And that's something we're experiencing firsthand on the ground in our Crusoe Cloud business.

What's the tangible outcome of this, Chase? Right now, most public companies depreciate their compute over five years.

Six is the standard.

Does this imply it goes longer than six?

"I don't know" is my honest answer. We're going to use compute so long as it's valuable to us or to someone else. And part of our strategy has been building services that abstract away the layers of compute. So you don't know if you're using an A100, an H100, or an MI300, nor should you care. What you care about is the actual service that you're getting from that.

Just like when you log in to Zoom or Google Meet or Teams, you're not thinking, wait, is this an Intel Ice Lake CPU or is this an AMD? You don't care what chip is running. You care about the service that you're getting, and the fact that you're able to log into this video chat and hear and speak to the other person on the other side.

Makes sense.

So we think the application scaling is really going to abstract away a lot of the core infrastructure, and we'll see how valuable this stuff is over the course of time, but I think it's probably longer. And then this is a similar chart that SemiAnalysis, who many of you are probably familiar with, published, and this is for Blackwells. So we see the pricing for Blackwells following a very similar trend, with this agent breakthrough at the end of the year. So when we look at this as a holistic picture, right, we're bringing it all together.


We have the data center, we have the power plant, and we have the chips. The upfront capex you're looking at is close to $60 million per megawatt, right? Again, I think I double counted something in there; this is the first time I'm going through these slides. But it's roughly $60 million per megawatt. And then when you look at the ongoing opex of this plant, it's a little over $1 million per megawatt. It's actually pretty limited opex, and this is for things like your power, your insurance, some of your labor on site that's repairing and replacing cables and GPUs that fail, and a number of other things. But call it like $1 to $2 million per megawatt.

So what is your revenue if you're just renting out those chips? I was using that chart before to show you rough pricing that you could rent an H100 for. What is your revenue per megawatt? It's roughly $15 million per megawatt. Right. So you're making this upfront capital investment of $60 million a megawatt, and you're getting $15 million a megawatt in annualized revenue for just renting access to the infrastructure. Now, how does this become a good business? I think a lot of it comes down to how you measure the depreciation of all the different bars that make up these capex numbers. And I think that's the critical question analysts on Wall Street are asking: how long is this building going to be valuable for? How long is this chip going to be valuable for? How long is this power plant going to be valuable for? What's the right depreciation curve?
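The depreciation question can be made concrete with a straight-line sensitivity sketch, using the ~$60M/MW capex and ~$15M/MW annualized rental revenue quoted in the talk (the candidate useful lives are illustrative):

```python
capex = 60           # $M per megawatt, full stack (facility + power + IT)
annual_revenue = 15  # $M per megawatt per year, bare GPU rental

for useful_life in (4, 5, 6, 8):  # assumed useful life in years
    dep = capex / useful_life     # straight-line depreciation charge per year
    print(f"{useful_life}-yr life: ${dep:4.1f}M/MW/yr depreciation, "
          f"{dep / annual_revenue:.0%} of rental revenue")
```

At a four-year life, depreciation alone consumes all of the rental revenue; at eight years, only half. That spread is why the useful-life question dominates the Wall Street debate.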

So you're looking at, call it, a four-year payback period for this huge investment, on a revenue basis.

And what are the rough... Well, this is what I'm saying. You have opex; stripping out the opex, whatever. Great. But then there's other labor that's not included here, like all the engineering workforce, etc.

So there's more opex than this that's not included. Again, this was put together this afternoon, but roughly four years. Yeah.

Yep. But what's another way to actually improve the value that you're delivering to customers? Again, I spoke about this vertically integrated strategy that Crusoe has when we deploy chips. One product that we offer is the managed compute cluster, for the engineer or developer that really wants to manage the infrastructure themselves: manage the compute nodes, run a big training workload, interact with individual virtual machines or a large managed Kubernetes cluster of compute. But for folks that just want to interact with the model: the title of this slide deck is "from electrons to tokens," right? So how do you get to tokens, and where's the value uplift you get when you add in this managed services layer, where you're actually serving the model, hosting a model, and providing an endpoint for a customer to hit that API endpoint and actually serve those ChatGPT or Anthropic queries that everybody's sending on their phones or laptops? You end up improving the margins quite a bit, adding anywhere from, call it, $5 to $15 million per megawatt. So you end up with, in a very optimistic case, call it $30 million per megawatt per year. So you end up with a two-year payback. That's a dramatically better output.
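The two payback figures quoted here fall straight out of the per-megawatt numbers. A sketch, taking the optimistic end of the $5-15M managed-services uplift:

```python
capex = 60        # $M per MW: ~$20M facility + power, ~$40M IT
opex = 1.5        # $M per MW per year (quoted as roughly $1-2M)
scenarios = {
    "bare rental": 15,        # $M per MW per year, renting GPU access
    "managed services": 30,   # bare rental plus optimistic token-serving uplift
}

for label, revenue in scenarios.items():
    payback_revenue = capex / revenue        # revenue-basis payback
    payback_cash = capex / (revenue - opex)  # net of opex
    print(f"{label}: {payback_revenue:.1f} yrs (revenue basis), "
          f"{payback_cash:.1f} yrs (net of opex)")
```

Netting out opex stretches the bare-rental payback from 4.0 to about 4.4 years, and the managed-services case from 2.0 to about 2.1, so the revenue-basis figures in the talk are close to the cash-flow ones.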

Makes sense. Perfect. If you have no other slides, I might roll you through a couple of questions that the class had, and then open it up for questions.

So, go ahead. Go ahead. Did you have more?

It's fine. This is just some pictures of some deployments that we had; I was just showing Blackwells and whatnot. And I'll talk about the inference scaling and where we actually try to bring down those labor costs. Crusoe designed something we call Crusoe Spark, which is our self-contained modular AI data center that we manufacture in centralized locations, where we can bring down the labor cost and actually bring down the infrastructure cost quite a bit. Call it 30 to 50% savings, depending on overall cost. So that $19 million a megawatt we can actually bring down pretty dramatically.

What's the capacity in terms of size? Is this a gigawatt, or a couple of megawatts, or...?

So each unit for our air-cooled architecture is 500 kW.

Got it.

And this is the air-cooled design, and I have a video here of them deployed in the field. I don't know if this is going to work... okay, well, it doesn't matter. And then we have a liquid-cooled version that's 2 megawatts, but you can deploy them in fleets, right, which actually opens up a lot of net-new power opportunities, which is a pretty neat solution.
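The modular unit sizes translate directly into fleet counts. A quick sketch; the 100 MW campus size is a hypothetical chosen for illustration, not a Crusoe deployment figure:

```python
AIR_COOLED_KW = 500       # per Crusoe Spark air-cooled unit
LIQUID_COOLED_KW = 2_000  # per liquid-cooled unit

target_mw = 100  # hypothetical campus size
target_kw = target_mw * 1_000

print(f"{target_mw} MW campus: {target_kw // AIR_COOLED_KW} air-cooled units "
      f"or {target_kw // LIQUID_COOLED_KW} liquid-cooled units")  # 200 or 50
```

The fleet-of-small-units model is what lets the capacity follow stranded or net-new power rather than requiring one giant interconnection.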

Amazing. Amazing. Well, in the interest of time, I can't think of a better person to ask the question I'm about to ask. You have seen probably every layer of the stack, from chips to power to gas to labor to networking. If you had to pick a layer of the stack, and in particular a company, and in particular a stock, what would you go long, what would you go short, and why?

I hope everybody's taking notes. Would I

go long or would I go short?

I will call you in a year from now.

Yeah.

See how that did.

Man, that's tough.

Other than Crusoe, of course.

Yeah. I mean, I'm turbo long Crusoe. But, man, I do think that oftentimes getting these things right is difficult when you look at the time horizon. I do think that my bear case is actually that huge investment that I showed across the electrical stack.

There are so many components that go into the electrical stack, because what you're doing is taking power from this high-voltage substation that's maybe at 345 kV. There's this new line going in in Texas at 765 kV. So it's very high-voltage power that then goes through this transformation process where you're stepping it down to medium voltage, stepping it down to low voltage, distributing it: tons of cable, tons of stuff. I think the data center is fundamentally going to drive a lot of innovation in the whole electrical stack and leverage a lot of solid-state electronics, solid-state transformers, and power electronics. And I think it puts in jeopardy a lot of these companies that fundamentally have not innovated that much in the last hundred years.

So this is like Eaton, Schneider, a number of other companies. And the reason I say it's very difficult to put a timeline on this is that those companies, I think, will do very well in the near term. They're on the critical path right now. They're going to do super well in the near term, and they're big partners of mine, so I hate saying that I'm negative on them. But over the long term, if they don't innovate, I think that whole piece of the stack is going to dramatically come down in cost because of innovators that are building out this next version of the electrical stack. And there's going to be huge shifts to things like 900-volt DC and all these different aspects.

So, that's an opportunity for the electrical engineers in the room.

Absolutely. I think it's a huge opportunity. Power electronics: how do you get power from 765 kV to like 900-volt DC in the rack? Innovating on that problem is a super huge opportunity for people.
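The conversion chain Chase sketches can be laid out stage by stage. The transmission and rack voltages are the ones quoted in the talk; the intermediate levels are typical values assumed for illustration, not Crusoe's actual design:

```python
# Illustrative step-down chain from transmission voltage to the data hall (volts).
ac_stages = [
    ("765 kV transmission line", 765_000),
    ("345 kV substation",        345_000),
    ("medium voltage (assumed)",  34_500),
    ("low voltage (assumed)",        480),
]

for (name_a, v_a), (name_b, v_b) in zip(ac_stages, ac_stages[1:]):
    print(f"{name_a} -> {name_b}: {v_a / v_b:.1f}:1")

# The final hop, 480 V AC to a ~900 V DC rack bus, is a rectification /
# power-electronics stage -- the piece flagged above as the innovation target.
print(f"Overall AC step-down: {765_000 / 480:,.0f}:1")  # ~1,594:1
```

Each of those stages is a transformer, switchgear, and cabling purchase today, which is why collapsing stages with solid-state power electronics would take so much cost out of the stack.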

Okay.

Any picks on the long side? If not, I have another question for you.

I mean, I'm bullish on so many things. Maybe one other thing on the short side: I do think that open source is... not that open source is winning, but open source will do well and take more share from the closed-source model space. Yeah.

Fascinating.

Elon's space data centers.

Yes.

Bullish or bearish? Real or fantasy? Happening in our lifetime or not: data centers in space.

So I'm actually very interested in this, and at Crusoe we've established a partnership with another player in this ecosystem called StarCloud, right? They've actually launched the first H100s into space. There's a lot of things to like about it, right? I just walked you through this whole stack of areas where I'm spending billions of dollars, right? All the concrete foundation? Guess what, you don't need that in space. All the permitting, all the approvals you need on the power side? Guess what, you don't need any of that in space.

A lot of the core networking pieces: what I didn't get into is the millions and millions of strands of fiber that go into one of these data centers, and all of the technicians that you have to have to plug all this stuff in. Guess what? In space, you use optics for everything. So everything is basically optically interconnected. And that's all very interesting. It's also very hard.

I think the thermal management piece is very challenging. And I also think the ongoing operations piece is very challenging. In these data centers, when you're operating these big, large-scale, interconnected compute clusters, things fail, right? GPUs fail. They have to be reseated in their compute tray. Sometimes they have to be RMA'd and sent back to the vendor. Guess what? You're not sending an astronaut into space to take a chip and send it back to Jensen, right? That just isn't going to happen. So you're going to have a natural degradation that will create challenging economics. And then,

you know, a lot of it rides on whether Starship fundamentally brings down the cost of payload.

Yeah. Does payload cost come down by two orders of magnitude? I don't know. I mean, he has a better idea than I do on that. My philosophy on this is: probably not material in the next 5 years, and probably not material for 10 years, but over a longer period of time, I think data centers in space are going to play a major role in the future of intelligent infrastructure.

In a couple of weeks, the SpaceX S-1 is going to be available for everybody here to read, and so we'll see what his time estimate is. We know your time estimate: next year or something.

Next year. That's right. That's right.

And final question. You know, you're a Stanford alum. If you were here right now, what advice would you have for students who are making decisions about what to study and where to focus? And no tougher time than now to make that decision.

I kind of have this philosophy that the exact things you learn in school kind of don't matter that much. I don't want to be disparaging about that, they're important, but it's more about this process of learning. And in my

experience, it comes back to one of our core values at Crusoe. You talked about one earlier, thinking like a mountaineer. Another of our core values is living on the infinite growth loop, right? This notion that nobody's a finished product. Everybody's a work in progress.

If you can get better, if you can learn more, if you have that tenacity to know how to improve yourself every single day, over time you get this exponential compounding, which is really the most valuable asset that any of us can have. So really it's about investing in the process of hard work, of grit, of grinding, and then actually leveraging a lot of the tools. Because I don't know what the world's going to look like 5 years from now with the mass adoption and utilization of AI, where we all have the workforce of a million people at our fingertips, right? It's fundamentally going to change work. It's going to change the way everybody operates. So again, I would focus less on the what, and I would focus more on the how, and leveraging AI tools to run and live your life, I think, is the advice I'd give to students.

Awesome Chase. Thank you so much for doing this.

Yeah. Thank you.
