CoreWeave: AI Bubble Poster Child Or The Next Tech Giant? — With Michael Intrator and Brian Venturo
By Alex Kantrowitz
Full Transcript
Is AI a bubble or the biggest boom of our lifetimes? The fate of one company, CoreWeave, may tell us everything we need to know. We'll be back with the company's founders right after this.
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. We have a great show for you today, because in studio with us are the founders of CoreWeave. CoreWeave CEO Michael Intrator is here with us.
Michael, welcome.
>> Thank you very much. Great to be here.
And CoreWeave's chief strategy officer Brian Venturo is also here. Brian, great to see you.
>> You both are running one of the most fascinating companies in the AI boom. Everyone has used you, effectively, as a Rorschach test to read in their beliefs or insecurities about what's going to happen in this AI moment. Some people think that you're the poster child for the AI bubble. Others think that you're perfectly positioned to take advantage of the boom in building that is occurring as demand goes through the roof. A couple of stats about you: as of today, the company is worth $42 billion after an IPO earlier this year. You've built eight new data centers across the US in the third quarter alone. And the latest reported numbers have you in possession of something like 250,000 of Nvidia's GPUs, which are the chips that companies use to run AI models and grow them, or train them, as they like to say. Let's just start off with this, because it's been a heck of a ride for you over the past couple of years. What has it been like being on the front lines of this AI buildout? Talk a little bit, help people feel it: the speed at which it's boomed, and what it's taken to do something like build eight data centers in a quarter.
>> It's exhausting.
>> All right. So, let's start with that.
It's been exhausting.
>> Yeah. You hit it dead on, right? It has been incredibly exciting. It has been an unbelievable year. I mean, we just IPOed really eight months ago, and it feels like it's been two lifetimes. The company is moving at incredible speed. We are building a massive percentage of the global AI infrastructure that's required to allow artificial intelligence to be what it is. And when I say massive, it's, you know, a meaningful percentage.
>> What's your estimate about the percentage?
>> That's tough. You know, look...
>> A lot?
>> "A lot" is... you know, we don't... We think of ourselves as providing enough of the compute that we have the ability to be relevant in the debate of how AI is going to be built and how it's going to run into the future. And so we don't know what the numbers are. There are lots of different providers of technology being used, and there's no real good way to put your fingers on the data. But meaningful, right? And that's an exciting place to be. And honestly, we talk about this in the company all the time: it's a privilege to come into work and focus your energy and your creativity every day on building a component of artificial intelligence, which is the issue of our time in many ways. And we get to sit there every day and pit ourselves against those issues. Which is great. I mean, I have a ball with it.
>> I'm taking a shot of this. Hold on.
Before we move on: I think that's really, let's call it, the practical side of it, right? And when you're a company growing as fast as we have, where we had maybe 100 employees three years ago and now we have 2,500 employees or so, there's an emotional side of this too, right? You know, since the IPO, we've been under this spotlight in the world: what are they doing? How are they doing it? Are they executing? And internally, we always set the highest bar for how fast we can do something and how high of a quality we can do it at. And as this industry has expanded so rapidly, there are things that happen, right? You have weather that impacts construction on a project. You have a truck that hits a bridge. You have all of these random exogenous or idiosyncratic things that happen in a supply chain, and then it comes back to us, and the world is like, wow, you failed, right? And inside the company, from a culture perspective, it's been so important for us to manage: listen, we're doing something at a scale no one's ever done before, at a speed no one's ever seen before. Of course things are going to go wrong, but take perspective. See how much we've done, right? And for our employees, if you're moving at a million miles an hour and you hit a speed bump, it's okay, right? It doesn't change the trajectory of what you're doing. It just provides the battle scar so it doesn't happen next time.
>> Yeah. Now, I can imagine it's a rough-and-tumble world, trying to build this with very demanding customers and very important technology that you're deploying, and the speed is crazy. It is interesting, looking at your founding story: you really started working on providing infrastructure for crypto, was it Ethereum mining or something like that, and then pivoted in a very smart way to this AI moment, establishing a relationship with Nvidia, and we'll talk about that, that's proven to be very useful and helpful for you, and probably for Nvidia as well. And now you're again in hyperdrive, building data centers. And the data centers are, if I have it right, largely licensed, or the capacity is rented out, mostly to the tech giants. I mean, the core customer is Microsoft, something like two-thirds of the demand according to your public filings is Microsoft, but there are others as well.
>> So, we actually spoke to customer concentration in our last earnings: there's no customer that represents more than 30% of our backlog. And so we've done an incredible job. It's been a focus of the company, everything from sales all the way through the build cycle, to really broaden the reach with which our solution touches artificial intelligence. So Microsoft is an important customer, a large, creditworthy, and formidable part of the AI ecosystem at large, but we've done a really good job bringing on other wonderful clients, wonderful customers, that are going to continue to use our solution as they build their products and deliver them to market.
>> Okay. And I definitely want to get into customer concentration in a little bit. So that's a good preface to what we'll touch on, and already some new data to me, so good to hear that. But I wanted to again just get into what it takes to build these things, these data centers. You're assembling them with incredible speed. So I just want to hear a little bit about, on the ground, what does it take to put together these data centers?
>> So historically, let's say two years ago, we were able to go out and buy capacity or lease capacity that was much further through the development cycle, right? The shell basically already existed. It was a fit-out construction process, which means going in and installing the last pieces of the cooling infrastructure, cabinets, conveyance for all the cabling, all the hundreds of miles of cabling we have in these things. But what's shifted over the past year is that now we're doing much more bespoke in-house design, right? To make sure that we're meeting the needs of what our customer's deployment is going to be. So it's everything now: okay, how is the cooling and electrical distribution designed? How are we ensuring electrical redundancy and reliability? How are we cooling the air-cooled side of these things? Because you have liquid cooling, but there's still a component of it that has to be cooled with air.
>> Can we pause on that?
>> Sure.
>> These chips run extremely hot, right?
>> Extremely hot.
>> Cooling. People talk about cooling. For those people who are coming to this for the first time: to run an AI data center, you've got to be able to cool the chips if you want to be successful.
>> So, long term, this is one of the things that I think the market misunderstands, right? Everybody believes that there's some differentiation in the plumbing of the liquid-cooled data center. That's not where the differentiation lies. It's all the same pipe and valves and fittings; everyone's using the same things there. The differentiation comes after you turn it on, in how you control those systems.
>> Okay.
>> Right. And that's what we've done incredibly well as a company, and we've very consciously not spoken about it externally for the past couple of years, because it is our secret sauce: how we provision, validate, and manage those data centers, all the way from the power and cooling infrastructure up through the GPUs and the servers. And it's why the most valuable companies in the world, the biggest AI labs, actually use us to run their most critical training jobs.
>> Right? I mean, it's a herculean task, right?
>> It's important to understand, when you're thinking about the ecosystem, right, and you're thinking about the different neoclouds that populate the...
>> What's a neocloud?
>> The worst term ever. I hate it. Think of it this way: in the common vernacular, everybody knows who AWS is, you know, Amazon. They know who Microsoft is, they know who Google is. Those are the hyperscalers, right? You can throw Oracle in there if you'd like. But then there's a class of providers that can deliver this infrastructure, and we are the leader among them. And what is important to understand is that if you took all of the other neoclouds and added their GPU fleets up, we would still be a multiple of all of them combined, in terms of the number of GPUs that are up and running and delivered to clients. And a large multiple.
>> When Brian is talking about things that the market is struggling to understand, it's important to understand what differentiates us, what allows us to be as successful as we have been. The software suite that we have built allows us to take the commodity GPU and deliver a de-commoditized premium service, one that allows people to extract as much value from this infrastructure as can possibly be extracted. That's really what CoreWeave is doing, and it's why, when Brian says the leading companies in the world and the leading labs in the world are relying upon us to deliver our service, it's because the product that they ultimately receive is the one that gives them the greatest probability of being successful at using the GPUs to deliver the products that their company is building.
>> Right. So, just to put it in plain English, always helpful for me: when a company like Microsoft works with you on building infrastructure for artificial intelligence, you've built some proprietary pieces of the puzzle, like your cooling system, like the software that runs the data center, and that allows them to get more out of the chips than they would have typically.
>> Yeah. And the nuance here is that when you build one of these data centers, it has 3,000 miles of fiber-optic cabling and a million optics that connect into the switches. These things all fail, right? And when they fail, the way that training jobs are run today, if one component fails or one component limits the performance, the balance of the training run is going to be governed by the worst-performing component.
>> Oh, right.
>> And our entire job is to build the automation, the predictive analytics, the machine learning models around saying: okay, we're seeing a problem here. How do we gracefully handle these things so it has the least impact on our customers' jobs? And that's the CoreWeave secret sauce.
>> Okay.
>> We have the world's largest data set of how these things run and how they fail, and we've built all the recovery mechanisms and the software intelligence to help our customers run these things.
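The "governed by the worst-performing component" point can be illustrated with a toy model: in a synchronous training job, every step waits for the slowest worker, so a single degraded link or GPU drags down the whole cluster. A minimal sketch, with purely illustrative numbers:

```python
# Toy model of the straggler effect in synchronous training.
# All timings are hypothetical, for illustration only.

def step_time(worker_times):
    """Each synchronous step finishes only when the slowest worker finishes."""
    return max(worker_times)

healthy = [1.00] * 64               # 64 workers, 1.0 s per step each
degraded = [1.00] * 63 + [1.50]     # one worker slowed 50% by a flaky link

print(step_time(healthy))   # 1.0 s per step: full cluster throughput
print(step_time(degraded))  # 1.5 s per step: ONE bad component costs the
                            # entire cluster a third of its throughput
```

This is why detecting and gracefully evicting the one bad optic or GPU matters far more than its share of the hardware would suggest.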
>> Is the demand that you're getting from your customers, and you mentioned training, mostly training the AI models? Because that's what a lot of the infrastructure has been used for: scaling these models, throwing more compute at them, throwing more data at them, making the models bigger, with the idea that the models get better. So are you seeing most of your demand on the training side of things, or has it gone to inference, where companies are actually using the models and deploying them into production?
>> It's a great question, and I think it speaks to the split, or this kind of delineation, of where the market's been for the last three years and where it's going. Our customer base for the last three years has primarily been the largest AI labs and enterprises that are building the capabilities of AI, right? And it's now shifted from the people building those capabilities to the people that want to use those capabilities to change business outcomes. And this is where all the enterprise adoption is coming from. You know, one of my favorite services out there is Lovable, right? You go to Lovable, you can build any app you want. There's a chatbot that helps you go through it. We're finally starting to see people chain together these capabilities to build real products that solve problems. And our business for the last three years has really been around the creation of those capabilities, and it has very quickly shifted to include not just the creation of them, but the deployment of them and their use in business practices. So one of the things that I didn't expect was that what looked like training two years ago is how inference looks today. Right? You're still dependent upon highly connected storage; your back-end networks become critical to this because the models are so large. So there's really no difference between the training infrastructure we deploy to build those capabilities and what our customers are ultimately using to serve them.
>> So has inference overtaken training for you?
>> We serve a tremendous amount of inference. But, you know...
>> I actually don't know the answer to that.
>> Six months ago, I would have said it was two-thirds training and one-third inference. It's probably close to 50/50 now.
>> Okay. But there are also some of our big customers that will use a campus for training, then they'll launch a new product and have to spill over into inference. You know, a lot of this is very dynamic, and it's been built to be so.
>> Yeah. This may provide a segue to some of the other subjects that we'll ultimately get to in this podcast, but for me, watching inference, understanding that inference is the monetization of the investment in artificial intelligence, is one of the most exciting trends that exists within AI. And we have a front-row seat across the entire cross-section of almost every large, important lab that's building this stuff, and watching them increasingly move from, let's say, one-third inference, climbing towards 50%, and at times even over 50%, of the fleet being used for inference is just an amazing indication of the scale of the demand to use artificial intelligence to serve customer inquiry.
>> And that means everything.
>> All right, one more question about this.
>> Yep.
>> Why does CoreWeave need to exist? I mean, we're talking about these big companies like Microsoft. Why wouldn't they just build their own data centers? Why are they licensing it from a third party?
>> So it's a great question. There was a void in this market, right? And there are a couple of pieces here. The biggest clouds in the world today are built off the cash engines of peripheral businesses, right? Google's built on search. Amazon's built on retail. Microsoft was built on enterprise software. We came pretty much out of nowhere, right? And the moment in time for us to get ourselves into this position was driven by crypto. You mentioned earlier that we came out of Ethereum mining. We were able to leverage the revenue from Ethereum mining to go out and build and deploy additional scale, so that when crypto went away, we had the infrastructure in place and, hopefully, enough clients that we were at escape velocity. So, you know, we recognized that compute was going to be valuable. We didn't necessarily know at the time what it was going to be valuable for. I don't think Mike and I ever had this idea that there was going to be hundreds of billions of dollars a year in capex for AI. But we had the thesis that compute was going to be incredibly valuable, and we wanted to own a lot of it. And we looked at that compute resource as an option, and we said, okay, what are the best things that we can do with this? And that's how we've always approached different business problems: what is our asset, how do we monetize it most effectively, what's the most valuable way to use it?
>> So I'm going to jump in here on this, but I want to go back to something that we kind of talked through as we started, which is that we've built a software stack from the ground up to optimize for the use cases associated with parallelized computing. We do it better than anyone else. The reason we exist is because we deliver a fantastic product that is highly in demand.
>> And incredibly differentiated.
>> And incredibly differentiated. And so, you know, we serve the largest players, but we also serve a ton of other AI companies that are building applications, where they have the choice to go and use us or to go and use one of the hyperscalers, and many, many, many of them choose to use our solution because it allows them to more effectively deliver compute. And one of the things that's really lost in this is that there's not an understanding of how fundamental the change from cloud 1.0 into cloud 2.0 was, as you moved from sequential computing into parallelized computing. When you made that leap, right, from hosting websites and data lakes into driving parallelized computing for artificial intelligence, it stands to reason that a fundamental change in how compute is used will also require a fundamental change in how you build the cloud to serve it. And we took advantage of that transition to build best-in-class solutions, right?
>> And that's why we exist.
>> So, I've heard an argument made that basically, for the big tech companies, to build these data centers they have to forecast demand out years in advance. It's a massive capital commitment. They're not sure whether it will pay off, and CoreWeave is useful to them because you're taking the risk, and then they will be able to use your capacity, sort of rent it out, as opposed to having to make these big investments on their own, and, you know, it's their ass if things go wrong.
>> Yeah. Look, that is a narrative. I don't think it actually tracks with the reality of the situation. I think the reality of the situation is that the large hyperscalers are building as fast as they can. Google just released a press release about building $50 billion worth of infrastructure while they're still buying from everyone else they can. Microsoft is building internally, and they're buying from lots of other players. I feel like that argument is model fitting, right? Somebody's got a preconceived notion of what this is going to look like, and now they're reconstructing the facts on the ground to fit that model, so that they can say, look, I'm right. But the reality is that I look at it very differently. I look at the way that we built our competitive advantage over the hyperscalers, the way that we built our competitive advantage over other neoclouds, and the way we did that: we understood that this type of computing was going to be important, and we built the infrastructure and the software to be able to serve it when the demand emerged. And we did it in a very risk-managed way. When I look at the future, when I think about the investments that go into building an AI factory, and I think about how much money is being put into the data center versus how much money is being put into the compute that goes inside of the data center, I think about the data centers as being basically an option on being able to provide, and be relevant for, the delivery of compute into the future. Right? We take our risk dollars as a company and we invest in the long poles, and the long poles are really twofold. One is building the best software in the world, and the second is having access to the data center capacity to be able to deliver compute when a wave of demand hits this market that requires you to deliver it. You can't just wake up and say, "Hey, I want to deliver a gigawatt worth of infrastructure." What you have to do is start years in advance building that gigawatt of infrastructure, so that when your customers say, "Hey, I just produced a new way of using AI that's going to require a gigawatt worth of infrastructure," you're able to serve it. We're going to have a tremendous portfolio of infrastructure that can be deployed into the future, and we're really excited about that. We think it's a wonderful way to go about building our business.
>> Right. And that's the question about the bet, right? You're betting that AI is going to continue to be adopted at a wild rate.
>> That's not entirely accurate.
>> Okay.
>> What we are doing is making the majority of our investments by taking long-term contracts from creditworthy entities and using those contracts as a way of raising money to build the infrastructure, where the demand and the credit and the capital have already been secured. Right? So let's say 85% of our exposure is to deliver compute to investment-grade entities, AI labs, or other large consumers of compute. The other 15% is our exposure to signing long-term contracts to be able to do that exact thing in the future.
>> And that's the way I look at it. And I think it's a much better way to think about how we're taking on risk, how we're dealing with leverage, and how we're positioning ourselves. If the market continues to grow, we're in a great position. If the market stabilizes in and around this level, we're fine. If the market contracts, if there's some new technology, then we will be left with some portion of that 15% that may have to wait for a few years before the market grows back into it. And we are fine with that. You know, people have talked about how the founders of this company look at the world with a different lens, because we don't come from Silicon Valley. We come from the commodity space. We come from Wall Street. We think about option value, right? When we think about compute, we think about the option value associated with it. When we think about the data centers, we think about the option value of being able to build, to be relevant in the future. And that's the way we go about allocating our risks and securing the contracts that we have in place right now.
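The option framing can be made concrete with a toy calculation: if the powered shell is a small fraction of total project cost, carrying shell capacity ahead of demand is a relatively cheap "premium" on the option to deploy compute later. Every figure below is a hypothetical round number, not CoreWeave data:

```python
# Toy illustration of the "data center as an option" framing.
# All costs are invented round numbers for illustration only.

shell_cost_per_mw = 10.0     # $M/MW: land, power, cooling, building shell
compute_cost_per_mw = 30.0   # $M/MW: the GPUs and servers that go inside

mw_built_ahead = 100         # capacity built before demand is contracted

# Money at risk if demand never materializes (the option "premium"):
option_premium = shell_cost_per_mw * mw_built_ahead
# Total project cost if the option is exercised (compute is installed):
full_project = (shell_cost_per_mw + compute_cost_per_mw) * mw_built_ahead

print(option_premium)                 # 1000.0 ($M at risk up front)
print(full_project)                   # 4000.0 ($M total if exercised)
print(option_premium / full_project)  # 0.25: the premium is a quarter of
                                      # the full project cost
```

Under these assumed ratios, most of the capital is only committed once a contract justifies filling the shell with compute, which is the asymmetry the option language is pointing at.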
>> Yeah. And to speak to one thing here: you talked about if the market contracts. I think that we would love that, because it presents tremendous opportunity for us.
>> How? Right, I mean, you're in a position where there are going to be distressed assets. There are going to be consolidation possibilities. That's when opportunity really comes in.
>> And, you know, there are a lot of times where we sit there and say, okay, we're looking for M&A, we're looking to invest in things, but the valuations don't make sense. And for Mike and me, you know, we've made our careers on waiting for those opportunities and saying, okay, these are the things that I want to buy when things don't necessarily go right for them, right? And that's really what excites us. You know, one of our other founders got on the phone with me last week. He's like, "I love this, Brian." I'm like, "What, Brian?" He's like, "This is the one where you start like this. You're so focused on: where are the opportunities? How do I go take things over?" And I say it to some people every once in a while: I feel like when there are headwinds in the market, it's actually easier to do this job.
>> Right.
>> Right. Than when the tailwinds are blowing at 1,000 miles an hour.
>> But can I ask: how have you set up the company to make sure that you're not the distressed asset if the contracts...
>> Look at the construction of our customer contract portfolio, right? Everybody last year talked about how customer concentration and exposure to Microsoft was a bad thing, but they have a better balance sheet than the US government, right? I'm not worried about them performing on their long-term obligations to us. That's basically the best possible position we can be in. And we've been super thoughtful about the way that we choose which customers to work with and how we manage the credit exposure, so that we're certain the investments we make will be paid back. And if you look at the people that are providing us the debt to do those projects, like Blackstone, right, they're some of the most sophisticated people in the world. And for their underwriting committees to come in and say, "Yes, I want to do this, and I want to scale it up as aggressively as possible"... like, you're telling me you're going to pit some financial analyst against Jon Gray? I'm going to go with Jon Gray.
>> Yeah. Well, you know, maybe a second on one of the fundamental building blocks of how we have expanded the way we have, and how we use debt, because I think that's one of the misunderstood components of how we have built this company. It is really important to understand that the way we build the components is we go into the market. Let's use Microsoft, because we've used them, but there are lots of other clients you could use, and they're totally interchangeable; from the perspective of the structure, it's still the same. We go to them and we say, "Hey, we've got access to this data center." They say, "We need compute." We say, "Okay, we're going to sign a contract." They sign a contract for five years. We structure that contract in a way that we can go back out to the Blackstones of the world and borrow money from them to go ahead and build the infrastructure to deliver to Microsoft within the five years of the contracted period with Microsoft. We pay for the infrastructure, we pay for the opex, we pay for the interest, and we earn an enormous margin on the infrastructure.
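The structure being described, borrowing against a signed multi-year contract and repaying the debt out of contract revenue, can be sketched with purely hypothetical numbers (none of these figures come from CoreWeave's actual deals):

```python
# Hypothetical sketch of contract-backed data center financing.
# Every number below is invented for illustration; real deal terms differ.

contract_revenue_per_year = 500.0   # $M/yr from a 5-year committed contract
contract_years = 5
capex_borrowed = 1200.0             # $M of debt raised against the contract
interest_rate = 0.10                # 10% annual rate on outstanding debt
opex_per_year = 60.0                # $M/yr to operate the site

debt_outstanding = capex_borrowed
total_margin = 0.0
for year in range(1, contract_years + 1):
    interest = debt_outstanding * interest_rate
    principal = capex_borrowed / contract_years   # straight-line paydown
    margin = contract_revenue_per_year - opex_per_year - interest - principal
    debt_outstanding -= principal
    total_margin += margin
    print(f"Year {year}: interest={interest:.0f}, margin={margin:.0f}")

print(f"Debt remaining: {debt_outstanding:.0f}")      # 0: repaid within the term
print(f"Cumulative margin: {total_margin:.0f}")       # 640 ($M over 5 years)
```

The key property the lenders are underwriting is visible in the loop: the debt fully amortizes inside the contracted period, so repayment depends on the customer's creditworthiness rather than on what happens to GPU demand after the contract ends.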
So yes, there is debt. We're not arguing that. We believe fundamentally that when you build any type of infrastructure at this scale, debt is the correct way to go about doing it. The examples run through history. Whether you're talking about building a power plant, building a distribution grid for electricity, whether you're talking about the telephone, whether you're talking about the steam engine and railroads: you go throughout history, this is the tool that you use, right? We didn't invent anything new here. We just took a tried-and-true method and applied it to the specifics of the depreciation associated with this asset, the obsolescence curve associated with this asset, and made the contours so that it worked in an airtight manner, so that guys like Jon Gray, or Blackstone, or BlackRock, or any of the big lenders, could look at it and say: I understand how they're going to underwrite this. I understand the risk in this. I understand that these guys are going to deliver compute to that balance sheet. They're going to get paid back, and when they get paid back, we're going to get paid back. So, let's lend them the money.
>> And that's lost on the market. They think we're running around with this incredible capacity to take on risk. But that's a really low-risk approach. Matter of fact, it's way more low-risk than saying, "Hey, we're going to do it on equity," because we're saving our equity for the long poles that you've got to invest in.
That's where you want to put your bullets. You want to use the debt markets to deal with a depreciating asset. It's the way it's done. It's the way it's been done throughout history.
>> Yeah. By the way, it's great that we're able to have this conversation. This is what we want to do on the show: take this complex stuff, talk about what the reactions have been in public, speak with the principals, and actually get the story. So, thank you for talking it through with me. And
on that note, let's continue. The argument, I think, that would be made is not that Microsoft isn't good for the money. The argument would be made that generative AI is still a developing category. It hasn't really shown the ability to turn consistent profit. And so the companies that are investing in a big way in it may one day wake up and say, you know, we don't really want to do that buildout.
OpenAI, for instance, let's just use them as an example: they have something like $1.4 trillion committed to spend on infrastructure. I think OpenAI might be the only ones that believe they'll actually spend that $1.4 trillion, and maybe their investors. So what do you think about that risk: that because AI is new and not as predictable as what you would have in a different category financed by debt, it is therefore riskier, even if the credit rating of a company like Microsoft is golden?
>> So, a couple things on OpenAI, because they are the tip of the spear in many ways for artificial intelligence. They have a franchise that has 800 million monthly users of their product, which is fully one-tenth: one out of every 10 human beings on the planet logs on to OpenAI.
>> Fastest growing tech product in history.
>> I use it all the time. I am addicted to it, and I don't even find it a bad addiction. It's an amazing product. I won't argue with that. So, you've got this product that's out there, and then you have this $1.4 trillion, which I believe has been confirmed by everybody but OpenAI, who would actually probably have issues with that number in terms of how much they're spending, when they're going to spend it, what are options, what's firm, all those kinds of things. And so,
I just think there's, you know, narrative shaping there. There's an incredible amount of people out there that are talking through how this is going to be done, when it's going to be done. And I don't think that they necessarily have all the correct information. That's number one.
Number two is that, you know, you listen to both Brian and I talk about how we think about credit. We're pretty sophisticated in how we think about credit. We've built our entire careers, long before we started this company, thinking about risk management in credit. OpenAI will be a percentage of our credit exposure, just like Microsoft will be a percentage of our credit exposure. And the way that
you manage credit against an unbelievable potential company, but a company that may not have the credit rating that is strong enough to support their aspirations, or that may have to tone it down, is you just make them a limited percentage of your overarching business, and you accept the risk on that while you mitigate the risk using credit from other companies. Companies like Meta, that we signed a $14 billion contract with, like Microsoft. Just incredible companies. And so you just think of them as: how much investment-grade exposure am I going to take, how much non-investment-grade exposure am I going to take, what's the correct ratio, and how am I going to mitigate that over time? That's the way we look at it.
>> And what happens if one of these
companies over time wants to walk away? Let's say Meta says, "Yeah, actually, artificial intelligence, we can develop it much more efficiently," or Microsoft says, "Yeah, AGI is actually a decade away, not three years away."
>> Yeah. So, AGI being a decade away, six decades, it doesn't matter. You were asking about how you run a
company in this dynamic environment, how you run a company that's going through this type of scaling. And I talk about this internally to the company all the time: we need to be directionally correct. The world is incredibly fluid. The world is incredibly dynamic. We are at the absolute bleeding edge of a new technology that's redefining the world. You're not going to get everything right, but directionally you have to go ahead and build a company that's moving in the correct ways to be able to take advantage of this supercycle that's going on. What do I think if Meta says,
"Hey, we're not going to continue to invest"? That is their prerogative as a company, but that doesn't in any way mitigate their contractual obligation to us through the term of the agreement that we went to Blackstone with and said, we're going to borrow money because we have a firm contract with Meta. That's not open to renegotiation. They
can't walk away. There was a wave of this that took place about a year ago: Microsoft is walking away. What are you talking about? This is a AAA company. They don't walk away from anything. If they make a contractual obligation, that's a contractual obligation. Even the idea that they would walk away from it is deeply misleading to the market.
>> Okay. One more thing on debt, then we'll move on. There have been some analysts that have talked about CoreWeave borrowing more money because you spend more money than you can get structurally. So you borrow to pay interest on the last loan.
>> Why don't you talk about how these actual debt instruments are structured, from the box perspective, and how the controls around these things work? Like, that'll put this to bed.
>> Yeah. So
>> like let's just be done with this.
>> There's a lot of analysts that have a lot of opinions based on a deeply incomplete understanding of how these are built. So maybe two seconds on it, and then Brian, you can kind of keep me on the rails here.
>> I'm pushing you off the rails as much as I can, for the record.
>> Once again, going back to the contract. We did a contract with Meta, right? When we did a contract with Meta, we go ahead and we sign the deal with Meta. We borrow the money from a syndicate of lenders, and then we go and we buy the infrastructure to build that facility.
We run the facility. As we're delivering GPU capacity to Meta, Meta sends money, but it doesn't come to us. It goes into what's called a box. Money flows into the box, and then it goes through a waterfall. The first thing it does is pay off the opex associated with the power and the data center. The second thing it does, after it's done paying that, is pay the interest to the lenders. The third thing it does, after it's paid all of the expenses, is release back up to our company.
>> Also principal.
>> And principal and interest, so that it completely amortizes within the five-year term of the contract with Meta.
Like, it's controlled by somebody else.
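The priority of payments described here can be sketched as a simple waterfall calculation. This is an illustrative toy model; the dollar figures and the simplified three-tier ordering (opex, then interest, then principal, then residual) are assumptions drawn from the conversation, not CoreWeave's actual deal terms:

```python
def run_waterfall(cash_in, opex_due, interest_due, principal_due):
    """Toy priority-of-payments: each tier is paid in full (or as far as
    the cash goes) before anything flows down to the next tier."""
    remaining = cash_in
    paid = {}
    for tier, due in [("opex", opex_due),
                      ("interest", interest_due),
                      ("principal", principal_due)]:
        paid[tier] = min(remaining, due)
        remaining -= paid[tier]
    # Only what survives every senior claim is released to the company.
    paid["residual_to_company"] = remaining
    return paid

# Hypothetical monthly figures in $M: customer payment 100, opex 20,
# interest 15, scheduled principal 40 -> 25 released to the company.
print(run_waterfall(100, 20, 15, 40))
```

Because principal amortizes inside the term, a structure like this is designed to be fully repaid by the time the customer contract ends.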
>> Yeah. And the important piece of this is, it's not that we just barely pay off the interest. The coverage ratio in that box is excellent, and it can be underwritten at a very narrow spread based on the risk analysis of the most sophisticated lenders in the world. Right? They're not
lending us this at 22%. They're lending this at, you know, 250 basis points over SOFR, right? Which means basically they're looking at it as a low-risk transaction to get their money back. It's not some crazy, you know, YOLO structure. It's an unbelievably risk-mitigated structure that's built to simply go ahead and allow us to build the infrastructure, deliver it, and then take the revenue.
Now, when you're scaling a company at the rate we're scaling, it tends to make sense that you're going to be investing all over the place. And we are. We're investing in data centers. We're investing in software. We're investing in people. We're investing in, you know, the companies that we're buying to help us reach up the software stack and provide more value. We're doing all of those things, which is exactly what we should be doing right now as this space opens up.
Whenever we see an opportunity, we look at it against all the other opportunities that are out there and say, that one makes sense for us. It drives the company forward. The idea that you're at risk from the debt: I mean, anytime you have debt, there's risk. I'm not going to argue that point, because you have to generate the revenue. But what are you talking about? You're talking about operational risk on the GPUs that are in the box, right?
>> Right. You know, one of the things for us, and why our spread on that interest rate has compressed over the last two years, is that we've demonstrated incredible capacity and capability in delivering that infrastructure, right? The first time we did one of these debt syndicates, I got paraded around the whole world and had to sit with every single underwriter asking me questions like, what are the doors to get into the data center? What is the floor made out of? Okay, guys. There was so much risk around our ability to operationalize it. That has been put to bed now, where everyone knows that we can do this, and we can do it at scale.
>> Right. And our cost of capital has significantly compressed.
>> I mean, it went from, what was it, SOFR plus 1350 down to SOFR plus 400, right?
>> Once again, for those who don't understand what that means: the higher the interest rate, the higher the risk. And what you're seeing is the lending market understanding that we have the capacity to deliver this infrastructure, and that they are willing to lend us money at increasingly lower rates, because they look at it as a lower-risk transaction.
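To put those spreads in concrete terms: the quotes are in basis points over the SOFR benchmark, and 100 basis points equal one percentage point. A minimal sketch, where the SOFR level of 4.3% is an assumed example value, not a figure from the conversation:

```python
def all_in_rate(benchmark_pct, spread_bps):
    """All-in floating rate: benchmark plus spread (1 bp = 0.01%)."""
    return benchmark_pct + spread_bps / 100.0

sofr = 4.3                          # assumed benchmark level, in percent
early = all_in_rate(sofr, 1350)     # SOFR + 1350 bps: about 17.8%
recent = all_in_rate(sofr, 400)     # SOFR + 400 bps: about 8.3%
print(f"early syndicate: {early:.1f}%, recent syndicate: {recent:.1f}%")
```

The roughly 950-basis-point compression between the two deals is what "lower perceived risk" means in cash terms for the borrower.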
>> Okay, I have so many more questions, and we have only 15 or 20 minutes left. So let's take a quick break and come back and talk about a few things that I find really fascinating: the depreciation on these AI chips, maybe a little bit about the financing structures, and then power. I think we need to talk about power. So let's do that when we're back right after this.
And we're back here on Big Technology Podcast with two-thirds of the founding team of CoreWeave. Michael Intrator is here; he's the CoreWeave CEO. And Brian Venturo is here; he's the CoreWeave chief strategy officer. We
talked in the first half about how these chips run hot. So let's just talk a little bit about the life cycle of these chips. I'm trying to figure this out. There are two differing opinions. One is that a GPU like the Nvidia H100 or the GB200 will burn as hot as it possibly can for two or three years and then effectively be useless, like a meltdown. It's like the life cycle of a car compressed into a couple of years. The other side of it is that, no, the GPUs can last, but they get less valuable over time because more powerful GPUs come out that are multiples better, in terms of their ability to do AI calculations, compared to previous generations. So can we just start with the basic physics of this? How long do these things last?
>> So... oh, I'm taking this one. You're out.
>> You take the physics. I'll take the other side.
>> So, last year is when we saw, let's call it, the hyperscalers that were around in the 2010s, Amazon, Microsoft, and Google, finally retire their Nvidia K80 fleets. And the K80 was a GPU that was introduced in 2014. So it was active in their clouds, almost fully utilized, for 10 years, right?
And the number of changes in architecture, and efficiency advancement, and performance advancement, over those 10 years was massive. You know, just last week we entered a multi-year contract to renew Nvidia A100s, which are GPUs that were introduced in 2021, right? So we're already going beyond the five-year contract life for GPUs that came out, you know, four years ago. The idea that these things burn out in two or three years is kind of bunk, right?
And from a physical perspective, within three years these things are all still under warranty, so if they break, they get replaced, right? They run hot; these things are designed to run hot. GPUs that we had deployed in 2019 are still running, still have customers on them. You know, some of it is customers that are deploying Grace Blackwell with us today.
They're going to use Grace Blackwell for their most frontier or bleeding-edge use cases. They're going to train their biggest models. They're going to do the things that they need the newest...
>> Nvidia's latest chip.
>> Yeah, it's Nvidia's latest chip. They're going to do the things that they need the most firepower to do, and they're going to run their inference on Hopper, or they're going to run their inference on Ampere, the A100s, right? Or they're going to run different steps of their pipeline on A100s, or run parts of their pipeline on CPU compute, right? There's always going to be a use for these different levels of compute infrastructure. It's just: where is the economic value, right? It's not a useful-life question. It's: where is the economic value in that time?
>> And this is where the questions start to build up. So: the chips run. We agree on that one. Now I've been taught, so thank you. The chips...
>> The chips running is off the table. And so now the question is, when it comes to power, right?
>> Hold on, hold on.
>> Let me finish this question, and you can answer the last one, but I just want to finish this one. Right? No, no, I really do want to hear, but let me just put this out there, and then you can answer whichever way you want. Okay. The
old generations of Nvidia GPUs are much less powerful than the newest generations. There's the Grace Blackwell that's out now. There's Vera Rubin that's coming out. And the argument is that the newer chips are so much more powerful that, even if the H100, the Hopper, can continue running, the value of those chips, which are being sold at $20,000 or $30,000 a pop, is going to be much less because of the power of the newer generations. And
then if you think about it again, if these companies move from training to inference, right? If, for instance, hypothetically, there's a diminishing return to training a bigger model, then those more powerful chips can be used to run inference. And then a company like CoreWeave, which has hundreds of thousands of the older generation of chips, is faced with a depreciation problem compared to the most powerful ones.
>> You got it.
>> So let's go through this a couple different ways. Okay.
>> All right.
>> I feel like the depreciation narrative is being spun up by... folks.
>> Yeah, like people that don't understand the space, that have never been in a data center.
>> So my theory here is it's being spun up by a bunch of folks who couldn't spell GPU two years ago, and now they are out there as experts on how it actually works. So let's actually go through the different pieces of it.
The most important tool that I have for understanding what the depreciation curve, or the obsolescence curve, of compute is, is not what I think, right? It's not what some short seller thinks. It's: what are the buyers, the most sophisticated companies in the world, willing to pay for today? And when they come to me and they put in a contract for a five-year deal or a six-year deal, in what world do I not think that they, who are the consumers of this, understand that there are new, more powerful chips coming out? Of course
they do. They understand it, but they also understand what their various use cases are. And they are saying to themselves, "I'm going to buy this because I'm going to need it today. I'm going to need it in three years, and I'm going to need it in five years." And the use within my system will change, but it didn't become useless. It hasn't become obsolete, right? They know the new stuff's coming, yet they're still buying it, because they know better than someone who doesn't know anything about how compute is used. My opinions around depreciation are informed by the only entities that get to vote in my world, which are the folks that are paying for the compute over time. Those are the guys that get to vote. Everybody else is just looking in and guessing, right? That's number one.
Number two is, Brian kind of made the point that we just had somebody come back and recontract, for term, a term deal on the H100s.
>> A100s.
>> No, the H100s, at 95% of the value of what they were originally sold for.
Once again, not showing this catastrophic depreciation curve that has been voiced out there. For me, it's about the data, because I need to make the decision to buy this infrastructure or not to buy this infrastructure. And so I've got to look through the noise and decide: are the big hyperscalers, the big labs, the big buyers of this infrastructure, who are looking at this and saying, "This stuff will be useful for us for the next five years, let's go out and buy it," right? Or should I turn to somebody who's never really understood how the cloud works, what a GPU is, what the different uses are as it moves from the most cutting-edge models to other uses within training, all the way down through inference to simpler, smaller models. And I think that's the way you
have got to look at this thing. Like, what are you talking about, man? If Microsoft and Meta and the other big buyers are coming in and buying for five and six years, I don't really think that anybody else really should, or gets to, have what I would consider an informed opinion on depreciation. And since I'm selling on term contracts specifically to insulate my company from the depreciation curve, I know how much I'm going to make, because I've sold it to Meta for five years, every hour of every day. And they're going to pay for it every hour of every day. What the curve looks like inside of that five years, that's already been priced into the deal I did with them.
>> Sorry. Go ahead.
>> Sorry, I was trying to interrupt you there because, in addition to the H100s, which came out in 2023, we signed a term contract for the A100s at like 95% of the original price range, on term, last week or two weeks ago. That's crazy.
>> Those GPUs are already five years old.
>> And that useful life is there.
>> Yeah.
>> And everyone is saying, "Oh, it's not useful." They have no idea. They don't actually have the data. We're sitting on all this data. We talk to every single one of these customers. And one of the interesting things that's happened over the past year: everyone was saying, "Well, where are all the enterprises?" Last year, the enterprises weren't there, because every AI lab on the planet was in a food fight for capacity, and the enterprises couldn't fight their way in, right? And now, as we're finally getting enough supply to make it available to many people, the groundswell of enterprises that are coming in to use this stuff is overwhelming, right? To the point that we're still choosing which customers we want to work with, right?
>> Right. This is a supply-constrained environment, right? And the supply constraint keeps getting tighter and tighter and tighter for these customers.
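One way to put the depreciation debate in numbers: compare a straight-line book-depreciation schedule with what re-contracting near the original price implies. This is a rough, hypothetical sketch; the prices are invented, and the 95% figure in the conversation refers to a contracted rental rate, which is only a loose proxy for the asset's remaining economic value:

```python
def straight_line_book_value(cost, life_years, age_years):
    """Remaining book value under straight-line depreciation to zero."""
    return max(0.0, cost * (1 - age_years / life_years))

cost = 30_000   # hypothetical purchase price per GPU, in dollars
age = 4         # years since that GPU generation was introduced

book = straight_line_book_value(cost, 5, age)   # about $6,000 on the books
implied = 0.95 * cost                           # re-contracted near full price
print(f"book value: ${book:,.0f}; re-contract implies: ${implied:,.0f}")
```

If customers keep paying near the original rate in year four, the economic obsolescence curve is much flatter than a five-year straight line to zero would suggest.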
>> Okay. I have two more questions.
Hopefully, we have time to get through both of them.
>> Let's do it.
>> We've got to talk briefly about this circular financing question. Just to set it up: Nvidia owns 5% or so of CoreWeave, and according to reports it has agreed to spend $1.3 billion over four years to rent its own chips from CoreWeave. And you also buy the GPUs from Nvidia. So can you talk a little bit about: is this too tight of a relationship? Is this sort of demand propping up supply, which is propping up demand?
>> Nvidia has made two investments in CoreWeave. They made an investment of $100 million, and that was early; that was in the B round, I believe...
>> At a $2 billion valuation.
>> Yeah. And then they made an investment of $250 million at IPO. CoreWeave has raised $25 billion to build and scale its business. I'm pretty sure that they don't think of their investment of $350 million as the secret sauce to standing up the largest company in the world. It's just a ridiculous narrative. So look,
the reality is, you've got a systemically imbalanced market, right? There are not enough GPUs out there to go ahead and support the demand for compute for artificial intelligence. And when you have such a disequilibrium in a market, it is not unusual to see companies working together to try to...
>> Align interests.
>> Align interests, and drive the compute buildout, or any other industry, as fast as possible.
Nvidia has been a wonderful partner of ours, and they have entered into a relationship with us, which is great. They've invested in other companies, which is great. They're trying to invest in the ecosystem and cultivate the buildout of what they consider to be a generational change in the way the world is going to work. And I agree with them. But do I think it is circular financing to invest a few hundred million dollars, hoping that we're going to then go ahead and spend billions and billions of dollars? It doesn't make any sense, right?
>> You know what their strategy is?
>> I don't think it's really prudent for me to guess at what Nvidia is doing. I think of it differently. I think of it as: there is a relationship that exists between us and Nvidia. We provide the most performant configuration possible of their infrastructure and deliver it to the consumers of computational power, and they appreciate that. They build incredible infrastructure that allows us to build our business, and we appreciate that. And, you know, it's
sort of like you're being distracted by a fly on the butt of the elephant.
>> And that's what this is. You're talking about a very de minimis sum of money that was invested. I mean, it's a lot of money, but not in the scope of what we're talking about here. It's a de minimis sum of money from the perspective of, you know, a company that is worth $40 billion. It was just a good investment. They looked at what we did, and they said, "These guys rock. We're going to invest in them."
>> Yeah.
>> Right. So, you know, once again, depreciation is one of the narratives that you hear continuously. Circular financing is one of the narratives you hear, and bubble is one of the narratives you hear. The other way of looking at it is just that the largest companies in the world can't get enough computing. They're desperate to get their hands on it so they can serve their clients, because it is profitable for their business. And that seems to have a lot more there-there to me.
>> Right. I know we're running out of time.
Can I just ask the power question, and then we can head out? Satya Nadella was on a podcast recently and said he has more chips than he can plug in, because power is basically the constraining factor for him.
>> Yeah.
>> There's been so much buildout, I mean, we talked about you guys building eight data centers in the most recent quarter, that people are talking about how it's going to maybe even raise consumer prices for energy. Is power the limiting factor for the continued ability to build out AI
infrastructure?
>> So, the constraint moves, right? And right now, I don't think that power itself, meaning grid connections and generation capacity, is the limiting factor. Right now it's construction and trades. It's human labor and supply chain that are the limiting factor. You went from a market that was building maybe one gigawatt of data center capacity a year to a market that's building 10 gigawatts of data center capacity a year, and the trade unions don't scale the same way, right? You're in a position where you may have had a labor force of a thousand people building a data center, and 200 of them were experienced tradesmen that had apprentices, and now you have a thousand people and 20 of them are experienced tradesmen, and everyone else is kind of an apprentice. And that stretches the construction supply chain very, very thin. So I think you're running into this temporary, transient problem of projects taking longer than people thought they would, right? That's the big blocker: you walk in and you say, "Okay, I have a data center being turned on next week," and during your energization process something goes wrong, and okay, now you're set back by 40 days, right? There are hiccups happening along the way, because things have gotten so stretched, because demand has been so insane and has been increasing at a step function every six months for the last three years. But the power is there. The power is there today.
>> Oh yeah, "today" is the key word, right? You've got to think about this stuff over time, right? What will happen is, data centers will be built, power within the grid will be consumed, and as that power gets consumed, there will need to be new power brought online in order to provide for future growth, as well as all the other uses for power that are required and growing, you know, independent of artificial intelligence. And that will be a challenge for the US grid over time. It will be the challenge for grids all around the world. But
at the moment: what is the problem that you're facing today? What is the problem you're going to be facing in three years? In three years, power is going to be an issue.
>> Today, it's power and chips.
>> But power is going to be an issue until somebody, like some college kid at Stanford, comes up with a better way to run this on the software side, and they generate crazy efficiencies, like we saw with DeepSeek back in January. The world freaked out that this all of a sudden got more efficient. We need that to happen like 10 more times, right?
>> Those efficiencies are good. It brings in new use cases. It lowers your cost, like your cost per token or cost per task. And for this to really permeate society, and to develop the most good for humanity, we need the cost to drop a lot, right? So someone's going to solve those problems, and I hope they solve them soon.
>> All right. Well, Michael, Brian, so great to speak with you. Thank you for taking all the questions and talking through the tricky stuff and some of the fun stuff. And we hope that you'll come back soon.
>> Thanks for having us.
>> Thank you.
>> All right, everybody. Thank you so much for listening and watching if you're here with us on YouTube or Spotify and we'll see you next time on Big Technology Podcast.