
Who's Actually Funding the AI Buildout?

By No Priors: AI, Machine Learning, Tech, & Startups

Summary

Topics Covered

  • GPUs Evolved from Crypto to AI Backbone
  • Debt Collateralized by Contracts, Not GPUs
  • Inference Demands Distributed Memory-Optimized Clouds
  • Stranded Power Unlocked by Storage Distribution
  • AI Enables Physical Hardware Scalability

Full Transcript

Hi listeners, welcome back to No Priors.

Today I'm here with Neil Tuari of Magnetar Capital, a $22 billion alternative asset manager at the center of the AI compute buildout. We talk about financial innovation, the depreciation of GPUs, and what's next in AI compute. Welcome. Thanks so much for doing this, Neil.

>> Absolutely. Really happy to be here.

>> So you are leading AI infrastructure at Magnetar. You're at the center of the buildout, enabling it, financing it. For any of our listeners who haven't heard, can you just explain a little bit what Magnetar is?

>> Sure. Magnetar has been around for, actually, this is our 20th year. We're an alternative asset manager, and that can mean a lot of different things, but we have three primary strategies. The first one is private credit, the second is a venture strategy, and the third is a more systematic or quantitative-focused public strategy. When people ask why we're here in this moment, especially on building out AI infrastructure, I think a lot of it has to do with our unique lens on helping to build capital-intensive businesses and using creative financing, whether it's venture or other structures with unique elements, and I think we're going to talk a lot about that, to build out and optimize the balance sheets for these capital-intensive businesses.

>> So, I remember hearing about you guys originally. You're the first investor I think we've ever had on the podcast. I'm excited about that.

>> Thank you.

>> I remember hearing about you and Magnetar initially and wondering, who's this big owner of CoreWeave, also helping OpenAI with some of their early buildouts? When did you guys first start looking at the problem and thinking about how to solve it?

>> Yeah, so we actually stumbled across the compute problem before it was compute. We met CoreWeave back in 2021, when they were transitioning from mining Ethereum into high performance compute. At that time the GPU was being used as an instrument to mine cryptocurrencies, and interestingly that same instrument could be used for high performance computing applications. The first one was visual effects, so think of things like Marvel movies. They were transitioning at that point from crypto mining into the first high performance compute use case, and this was all before AI. So we made our first investment before the AI trade started, but we added a lot of optionality, because we could envision a world where the GPU could be used for a lot of different high performance computing applications. AI was on the radar, machine learning was on the radar for us, but I wouldn't say we could foresee everything that happened. We just happened to be at the right place at the right time, and we continued to double down as the company progressed and started shifting into more workloads that were machine learning and AI training based.

>> Did you have like an existing significant data center investing footprint?

>> No. Interestingly, at Magnetar we have invested across asset classes. We've done a lot of property and real estate investing, as an example, and investing in energy; we had an energy business historically. So for a lot of the elements of what constitutes a data center, power, energy, land, real estate, we had a lot of background in those spaces. I think we were new to compute; that was a new sector for us. So those two worlds merged: we obviously came up the curve on the compute side, but we had a lot of background on the elements that constitute what it means to build a cloud.

>> So you were in this company, you saw the demand, and you said it's going to grow and we're going to make this a big part of our business.

>> Exactly. What was interesting is we made our first investment in 2021, and then about a year later we continued to see expansion of use cases for what at that time was called high performance compute. Towards the end of '22 the whole AI discussion started, and as we entered 2023, CoreWeave started to train models for OpenAI. That's when things really started growing, because of the sheer amount of compute that was needed to train an LLM; it was the first time that had ever been done. What allowed them to take advantage of that opportunity was the historical backgrounds of a lot of the founders, which were in energy asset management. When you fast forward to today and look at what constitutes your ability to build a GPU cloud, it's your ability to manage these highly complex assets, and it fundamentally comes down to access to power and energy. So they had those elements with them, and they obviously brought on a lot of talent on the cloud side. Putting all of that together, at that moment, allowed them to build very large scale, reliable clusters for OpenAI, and obviously many other customers since then. The last comment I'll make is that what really allowed them to win this market early on was focus on two things: scale and reliability. Those are the two things that have been really difficult for a lot of the new entrants since then, because scale has to do with your access to capital and your access to energy, power, and data centers, while reliability really had to do with their ability to manage a giant fleet of GPUs, which is actually quite complicated. Whether it's reliability from GPU failures or software challenges, building a fleet that can healthily be online all the time at 99.9% reliability is incredibly difficult, and that's something they had started back in the 2017-2018 time frame. They were at the right moment, at the right place, with the right technology stack to build the optimal cloud for that moment.
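To make that 99.9% number concrete, here is a back-of-the-envelope sketch; the fleet size and per-GPU failure rate are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope fleet reliability math.
# The fleet size and per-GPU hourly failure rate are illustrative assumptions.

HOURS_PER_YEAR = 8760

target_availability = 0.999
allowed_downtime_h = (1 - target_availability) * HOURS_PER_YEAR  # ~8.8 h/year

# For a synchronous training job, one bad node can stall the whole cluster.
# Assume a hypothetical 10,000-GPU fleet and a 0.01% chance that any given
# GPU fails in any given hour.
num_gpus = 10_000
p_gpu_fail_per_hour = 1e-4

# Probability that at least one GPU in the fleet fails in a given hour.
p_any_failure_per_hour = 1 - (1 - p_gpu_fail_per_hour) ** num_gpus

print(f"Allowed downtime at 99.9%: {allowed_downtime_h:.1f} hours/year")
print(f"P(some GPU fails in any given hour): {p_any_failure_per_hour:.0%}")
# ~63% per hour under these assumptions, which is why fleet health monitoring,
# checkpointing, and fast node replacement dominate GPU-cloud reliability.
```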

>> I've definitely experienced that with our portfolio of companies that are building large training clusters. CoreWeave has a reputation for reliability that not everyone has reached. Can you help characterize, fast forwarding two and a half or three years to now, what is the scale of the problem today?

>> Yeah. So let's start with capex. Capex for AI compute and infrastructure in 2026, at least from the hyperscalers, is projected to be between $660 and $690 billion, and over the next several years that scales to trillions of dollars. So the scale of the problem is: how do you build that amount of capex efficiently? A lot of that has to do with your ability to have access to those core elements, energy, power, data center space, and so on, but one of the things that's not talked about as much is capital: access to capital and how the capital is structured. What I mean by that is, this is billions to trillions of dollars of capex, and just using equity dollars alone is not an efficient way to scale it. That's obviously massive dilution. It's not an easy problem to solve.

>> When we first met, I had slowly come to this realization. I was like, I don't think we should take the dilution for the cluster.

>> Yeah. Right. Exactly. And so that's where, when you and I have talked about structuring, I can give a couple of examples if that's helpful. The first one was DDTL structures, or SPV debt structures. Think of it as an SPV: inside the SPV are the capex, the collateral, which is the GPUs, and the contracts themselves. In this example, the actual asset, or collateral, was not really just the GPUs themselves. It was really the contracted cash flows from, in this case, investment grade counterparties.

>> This is the consumer of the compute?

>> The consumer of the compute, exactly: your Microsofts, your Metas, and so on. And I think the reason that was done is really twofold. When you look at the scale of the problem, those particular contracts needed billions of dollars of debt to finance the capex, and obviously, for a nascent and growing company, that's really hard to raise. So part of structuring it this way is ensuring you have guaranteed offtake on the back end to minimize the risk for debt holders. I think that's a lot of what the market got wrong, especially when there was a lot of press about this early on, where the story was: there are billions of debt on these highly depreciating assets and it's extremely speculative. These debt structures were oftentimes characterized in the media as having GPUs as collateral, and that's like putting a used car up as collateral; it's obviously just going to depreciate incredibly fast, so it's a very risky structure. What got missed was that the GPUs themselves were actually the secondary or tertiary level of collateral in those instruments. The primary collateral was the contracted cash flows from investment grade counterparties.

>> Microsoft or Nvidia or somebody like that saying, "I'm committed to pay you. I know you can pay me."

>> Take or pay contracts, and they're like five years in length. So I think that was one feature that's unique to talk about.
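In code terms, a minimal sketch of that collateral structure, where lenders underwrite the contracted cash flows first and the GPUs second; the names, fields, and amounts are illustrative assumptions, not actual deal terms:

```python
from dataclasses import dataclass, field

@dataclass
class TakeOrPayContract:
    """A committed, contracted revenue stream from a compute buyer."""
    counterparty: str
    investment_grade: bool
    annual_payment_musd: float  # committed payment, $M per year
    term_years: int

@dataclass
class ComputeSPV:
    """The SPV holds the capex, the contracts, and the GPUs. Primary
    collateral is the contracted cash flow; the GPUs are only the
    secondary or tertiary layer."""
    contracts: list[TakeOrPayContract] = field(default_factory=list)
    gpu_book_value_musd: float = 0.0  # secondary collateral, depreciating

    def contracted_cash_flows_musd(self) -> float:
        # What the debt holders actually underwrite.
        return sum(c.annual_payment_musd * c.term_years for c in self.contracts)

# Hypothetical deal, numbers invented for illustration:
spv = ComputeSPV(gpu_book_value_musd=800.0)
spv.contracts.append(TakeOrPayContract("Hyperscaler A", True, 300.0, 5))
print(spv.contracted_cash_flows_musd())  # 1500.0 ($M of committed offtake)
```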

And then the second one really has to do with the debt itself and how it amortizes. In simple terms, when you have debt, you have principal and interest, and you have to pay it off over time. In these structures, the payback period on the capex was roughly two to three years, and the debt itself was four to five years in length, where the entire debt amortized during the period the debt was outstanding. So at the end you ended up with a zero balance on the debt; there was no balloon payment or anything really due on the back end. The question that often comes up is: isn't that a very risky type of structure, because these things are depreciating incredibly quickly? There are two comments here. The first is on that depreciation question. In these kinds of debt structures, it doesn't really matter, because the debt is fully paid off by the end of the debt term against committed contracts from investment grade counterparties. And then at the very end, the actual upside, or residual value, and I know there are a lot of questions on residual value, is held by the cloud player, in this example CoreWeave, or any of the others. That's a really interesting prospect, because you can see a world where all of this capex is paid off incredibly quickly and there's an opportunity to redeploy it without having to pay for any additional debt against that redeployment.
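A minimal sketch of that fully amortizing shape, zero balance at maturity, no balloon; the principal, rate, and term are illustrative assumptions:

```python
# Fully amortizing loan: level payments, zero balance at maturity, no balloon.
# Principal, rate, and the five-year term are illustrative assumptions.

principal = 1_000.0  # $M of debt raised against the capex
annual_rate = 0.10
years = 5

# Standard level-payment annuity formula.
payment = principal * annual_rate / (1 - (1 + annual_rate) ** -years)

balance = principal
for year in range(1, years + 1):
    interest = balance * annual_rate
    balance -= payment - interest
    balance = max(balance, 0.0)  # clamp float dust in the final year
    print(f"Year {year}: payment {payment:.1f}, remaining balance {balance:.1f}")

# By year 5 the balance is 0: the take-or-pay cash flows service the debt in
# full during the contract term, and any residual GPU value sits with the cloud.
```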

>> How have the instruments changed?

>> They've changed in several ways. The first is, when you look at these SPVs, you're starting to see ways to change the portfolio construction of who can go inside one of these debt structures. In the early days, these were all only investment grade counterparties, because the space was so nascent and the operators had no experience. What you're starting to see now is a blend of investment grade and non-investment grade. So what does that actually mean? It means you're seeing these structures with investment grade counterparties, like your hyperscalers and other corporates that are IG, mixed alongside some of the AI-native companies: the AI model companies, the labs, the software companies that are building AI startups. You're seeing those companies get mixed in alongside the IG companies to build a portfolio, because now you have the history showing you can do this, and you have structures where you can balance the risk between IG and non-IG. We're continuing to see that move to help finance the model companies and a lot of these startups. That was obviously difficult to do three or four years ago; it's starting to become easier as these companies have more runtime and the ability to make the compute fungible.

>> All of our portfolio companies that buy compute tell me it's a supply-constrained market today. One, is that true? And two, when you think about continuing to grow your business or grow this ecosystem, what's going to stop it? What could slow down a buildout?

>> Yeah. I mean, what's interesting is that if you look at 2023 and 2024, we were very supply constrained, and the supply constraint was chips; no one could get access to chips.

>> Yes, we bought chips.

>> We bought chips, right?

>> And there was this thought that, okay, there's going to be an overbuild of chips and then the supply constraints will go away. Well, fast forward to 2026, and what we see is that there is obviously more availability of chips, but building and operating these data centers requires people, power, and infrastructure, a lot of things that have a lot of bottlenecks. So actually taking these chips and making them into useful, revenue-generating assets is really the bottleneck.

>> It's also not clear that there is supply of chips of the latest generation at scale.

>> That's true.

>> Soon, which is how everybody wants them.

>> Exactly. And it's not only the high-end players that want access to the latest chips; obviously startups want access to those too. And I think it has to do with efficiency.

>> Mhm.

>> One of our friends, or one of your friends as well, Dylan Patel over at SemiAnalysis, posted an interesting article last week on inference, inference spend, and inference performance. And there are a lot of jokes made about Jensen math.

>> He seems pretty good at math.

>> He's actually great at math. Going from the Hoppers, the H100 or H200 series of GPUs, into the Blackwells, there was a claim made that they could be 30 times more efficient, and the data from some of that analysis showed they were 90 to 100 times more efficient in terms of inference performance. So part of the need to go to these new chips is, yes, more computing power, but also that it can be cheaper to operate: more performance.

>> Price performance. Exactly.

>> Mhm. Yes. My favorite Jensenism is the more you buy, the more you save.

>> Exactly. It's actually true.
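To see how that can be literally true on a cost-per-token basis, a rough sketch; the dollar prices and throughputs are invented for illustration, while the efficiency multiple echoes the 90 to 100x figure above:

```python
# Rough price-performance arithmetic. Hourly prices and throughputs are
# invented for illustration; the ~90x multiple echoes the figure above.

old_gen_cost_per_hour = 2.0           # assumed $/GPU-hour, Hopper-class
old_gen_tokens_per_hour = 1_000_000   # assumed inference throughput

new_gen_cost_per_hour = 6.0           # assume ~3x the hourly price...
new_gen_tokens_per_hour = 90_000_000  # ...but ~90x the inference throughput

old_cost_per_mtok = old_gen_cost_per_hour / (old_gen_tokens_per_hour / 1e6)
new_cost_per_mtok = new_gen_cost_per_hour / (new_gen_tokens_per_hour / 1e6)

print(f"Old generation: ${old_cost_per_mtok:.3f} per million tokens")  # $2.000
print(f"New generation: ${new_cost_per_mtok:.3f} per million tokens")  # $0.067
# ~30x cheaper per token even at 3x the hourly price: buying the newer,
# pricier chip lowers the cost of each token served.
```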

>> Yeah. Crazy. Help me address this criticism around circular financing.

>> Yeah. It's obviously the topic du jour. The way we see it and frame it really has to do with the demand signals: who are the eventual buyers, and how is this being used? At least from our perspective, we continue to see insatiable demand. If you go back to the previous big tech buildout in the early 2000s, there was obviously a lot of fiber being built, and you had dark fiber, an overbuild happening. What you see here is that you don't see any dark GPUs.

>> Exactly. Every GPU gets used. Yeah.

>> And then, number two, you're starting to see actual economic value. I think last year enterprise AI had about $37 billion of total TAM, and it's continued to grow like crazy. At least personally, and I'm sure you see this too, I use these tools all the time and I find them incredibly valuable. The actual token economics of positive ROI are here now, I think, from our perspective. So the circularity comment applies when you're building speculative compute capacity, or if you're purely doing vendor financing and trying to do some unique rev-rec type item related to that. That's not what we see. What we see is financing to support building out the demand against use cases that are very positive in their ROI. So our perspective is that it's not a real concern, and it really has to do with who the ultimate buyers are. The ultimate buyers have been, at scale, the hyperscalers. They're deploying this at scale, and the economics are positive on a unit economic basis in terms of deploying intelligence. I think we're at a moment in time where we're really starting to see that.

>> In my own experience, I have been a heavy AI user for several years, but reasoning advances and the ability to scale up inference, especially around code, mean I'm up against my max limit all the time in a way that was not true initially. How are the inference workloads actually growing? It's a good demand signal that there is value, but how does that change your business?

>> Yeah. So one thing that's interesting that we're seeing is, obviously, there's been the shift from training to inference over the last few years, and that split continues to grow on the inference side as usable, ROI-positive applications get developed. There are two things I see on the inference side now. First, inference is a lot more complex than initially thought. What I mean by that is it's not as simple as: you train a model and then it's easy to inference it. In certain cases you can do that on similar infrastructure, but there are issues around latency, fungibility, and really optimizing the cost of your compute on the inference side. How do you manage peaks of inference demand? It's not linear like training, where your GPUs are on 100% of the time; with inference you have a lot more variability, so there are a lot more nuances in optimizing inference. The second thing I've observed is that inference is definitely a memory problem, a memory throughput problem. On the inference side you have these phases called prefill and decode, and how you optimize that across a fleet of GPUs is actually a unique technical problem.
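A rough sketch of why decode, in particular, is a memory-throughput problem; the model size, cache size, and bandwidth are illustrative assumptions:

```python
# Why decode is memory-bound: each generated token re-reads the model weights
# (plus the KV cache), so tokens/sec is capped by memory bandwidth, not FLOPs.
# All numbers below are illustrative assumptions.

weights_gb = 140.0         # e.g. a ~70B-parameter model at 16-bit precision
kv_cache_gb = 20.0         # KV cache for the active batch (assumed)
mem_bandwidth_gb_s = 3000  # HBM bandwidth of one high-end GPU (assumed)

gb_moved_per_token = weights_gb + kv_cache_gb
max_tokens_per_sec = mem_bandwidth_gb_s / gb_moved_per_token

print(f"Upper bound: ~{max_tokens_per_sec:.0f} tokens/sec per replica")
# ~19 tokens/sec for a single unbatched stream. Batching amortizes the weight
# reads across requests, which is why scheduling prefill vs. decode across a
# fleet is its own optimization problem.
```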

>> And then the third, I would say, is distribution. A lot of the time, training infrastructure is quite centralized. What you're seeing with inference, in many use cases, as this becomes more ubiquitous, is that you're going to have more and more decentralized inference clusters. Actually, one of my favorite companies is one of your companies, Baseten, which is really optimizing distributed inference at scale. One thing that's interesting when you look at companies like that, and at other inference clouds, is how you optimize the compute and build out clusters that could actually look very different from a training cluster, where a training cluster might be 50, 100, 150 megawatts inside one set of four walls.

>> Mhm.

>> I think you're starting to see distributed inference that could be four or five megawatts across five separate data centers, stitched together in different areas. That looks very different from a power perspective, and the software matters a lot more when you're doing distributed inference. In terms of your question of how it impacts us: where we started this conversation was financing compute, and that obviously started with mostly training. A lot of those hyperscalers are now doing a lot of inference on that same infrastructure, but these are investment grade counterparties, and it's easier to lend money to build out clusters for those customers. Now that you have this new crop of inference clouds and application layer companies that need tons of inference, the key question we're really focused on is how we can finance the next build, which is distributed inference. And maybe the last one or two takeaways: one thing I'm seeing is that for every application layer company out there, the highest line item in COGS is compute. And the inference companies and inference clouds out there are mostly purchasing compute from either other clouds or unused capacity, and when you look at the margins on that, you've got layered margins. So there's a push to own your own infrastructure, to really drive and increase profit margins, but also for the ability to have control of your own destiny. I think a lot of the application layer companies and inference clouds are grappling with how they can build, own, and operate their own infrastructure, and that's something I'm really looking into.
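A quick sketch of that layered-margin arithmetic; all prices and margin layers are invented for illustration:

```python
# Margin stacking when an application company rents inference capacity from an
# inference cloud that itself rents from an underlying cloud.
# All prices and margins are invented for illustration.

underlying_cost = 1.00                            # cost to serve $1 of compute
inference_cloud_price = underlying_cost * 1.40    # reseller adds a 40% layer
app_price_to_user = inference_cloud_price * 1.50  # app prices at 1.5x its COGS

rented_margin = 1 - inference_cloud_price / app_price_to_user
owned_margin = 1 - underlying_cost / app_price_to_user

print(f"App gross margin renting through a reseller: {rented_margin:.0%}")  # 33%
print(f"App gross margin owning the infrastructure:  {owned_margin:.0%}")   # 52%
# Removing one margin layer is the economic pull toward owning your own build.
```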

>> I am too. And I think one of the things that is going to make a big difference in this ecosystem is: can the inference clouds, like Baseten, deliver the reliability you would expect from a traditional cloud?

>> Because the distributed data center operations they consume today do not offer that reliability.

>> Right.

>> And the other thing that's interesting, and this is additional reporting from last week: if you're familiar with Silicon Data, they put together a lot of data on spot pricing and price-per-token performance. This is Carmen Li's company. One thing I found really interesting in an article she published last week had to do with how two pieces of compute that look identical on paper have wildly different performance, everything from reliability to cost to speed. As you have distributed inference, how you mash together very different types of compute and try to optimize for reliability is super interesting. And that gets to one thing I find really interesting that Nvidia is doing, which is this concept of AI factories, and building AI factories behind corporates and AI companies. Maybe the way I unpack that is: you've got the larger, monolithic cloud players, the hyperscalers and the neoclouds, that are building large scale cloud environments. A lot of where I think Nvidia and others see this going is, yes, those are going to be important components and huge markets, but corporates, the Fortune 500, and AI companies that use a ton of compute will want dedicated AI factories associated with the workloads that they run and that they have control over. So I think you're starting to see the early indications of how you finance and build out, think of them as literally AI factories, that sit on-prem with a company that can operate its workloads.

>> You're talking about my Mac mini farm.

>> Exactly.

No, but all joking aside, I think one thing that is another supporting factor for the use of all of the compute we have, and can create over the coming years, is that power is clearly the limiting factor.

>> It's easier to get more power in smaller...

>> ...units. Yeah. And I think that as inference demand is growing, anyone who has usable compute for inference is going to find a lot of partners for offtake.

>> Exactly.

>> Okay, let's look at the future a little bit while we have ten minutes. Let's talk about the macro. People talk about energy, natural gas, the grid, the slowness of nuclear. What do you think about over the next 6 or 12 months?

>> Over the last year I've been spending a ton of time in the power and energy markets, looking at interesting solutions that can help scale power for the gap that we see. A few observations. The first is that we do have a power problem, but I think it's a bit more nuanced than a lot of the reporting out there, where...

>> It's just, we can't generate.

>> We can't generate. Yeah. I think there's actually quite a bit of stranded power across the grid, across the country. What I mean by that is that a lot of the utilities are built in a way where they're focused on peak power. They've got natural gas peakers, and they're focused on providing peak power for those moments when demand is off the charts, and that's obviously only for a few days out of the year. So there are lots of generating assets out there; the question is that they're a bit stranded. So I look at the power problem as multifold. The first part is: how can you take the power we have on the grid and actually make it usable? A lot of that has to do with flexibility and storage. So we've been spending a lot of time looking at the energy storage business and distribution: how can you store unused capacity, peak-shaved capacity, store it, and then distribute it when it's needed?
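A toy sketch of that store-and-distribute idea; the load, grid headroom, and battery size are invented for illustration:

```python
# Toy peak-shaving model: storage charges from grid headroom off-peak and
# discharges at the peak, letting a flat data center load sit behind a grid
# connection that has zero spare capacity at the worst hours.
# All numbers are invented for illustration.

headroom_mw = [10, 10, 10, 10, 10, 2, 0, 0, 2, 6, 10, 10]  # spare grid MW/hour
load_mw = 6.0        # constant data center draw
battery_mwh = 20.0
soc = 0.0            # battery state of charge, MWh

for hour, spare in enumerate(headroom_mw):
    grid_to_load = min(load_mw, spare)
    gap = load_mw - grid_to_load              # what the grid can't cover now
    discharge = min(gap, soc)                 # battery bridges the gap
    charge = min(spare - grid_to_load, battery_mwh - soc)  # bank any surplus
    soc += charge - discharge
    print(f"h{hour:02d}: grid {grid_to_load:4.1f} MW, battery {discharge:4.1f} MW, "
          f"served {grid_to_load + discharge:4.1f}/{load_mw} MW, SOC {soc:5.1f} MWh")
# The 6 MW load rides through hours where the feeder has no headroom at all:
# stranded off-peak capacity becomes usable peak capacity.
```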

>> We made an investment in a company called Taurus, which I think I mentioned to you, which is building this distributed utility layer, almost a mesh infrastructure, to store excess capacity from a variety of sources and then distribute it at the time when it's needed. I think that's a critical layer that needs to be built. Longer term there is a generation problem, but in the shorter term it's really more about distribution and storage.

>> And then the other piece I would say is that the true bottleneck, at least in the short term, the next 6 to 12 months, is, I don't want to use the word simplistic, but it's things like structural steel, and finding electricians who can build.

>> Sorry, you can't get enough steel?

>> You can't get enough steel, you can't...

>> This is not something I was aware of.

>> You can't get steel, and you can't find enough electricians to build out the power infrastructure: substation transformers, air chillers. These are very specific pieces of power infrastructure needed just to get to the point where you can start to build a powered shell on a piece of land. So the bottlenecks in the short term really are people and equipment.

>> And then the other interesting thing on the generation side is that regulatory is obviously a big challenge. So there's a combination of bring-your-own-capacity, and there's a lot of that that's interesting right now. A site that can potentially grow to 50 megawatts might start with only 10 megawatts of grid interconnect, but can you add solar and natural gas turbines, putting these various bring-your-own-capacity pieces of technology together to make that site usable? A lot of what's being looked at, and a lot of what I'm looking at right now, is really bring-your-own-capacity, at least in the short term.

>> Yeah. If people don't know the origin story of Crusoe and flare gas, it's actually really interesting as an example of: there is actually lots of energy, well, some energy, out there, and you can make much more of it consumable.

>> Yep. Exactly.
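A quick arithmetic sketch of that capacity stacking; all figures are invented for illustration:

```python
# "Bring your own capacity": reaching a site's power target from a small grid
# interconnect plus onsite generation. All figures are invented for illustration.

site_target_mw = 50.0
grid_interconnect_mw = 10.0

onsite_sources_mw = {
    "natural gas turbines": 25.0,          # assumed onsite generation
    "solar plus storage (firmed)": 15.0,   # assumed firmed output, not nameplate
}

total_mw = grid_interconnect_mw + sum(onsite_sources_mw.values())
print(f"Stacked capacity: {total_mw:.0f} MW vs. target {site_target_mw:.0f} MW")
# 50 MW total: the grid supplies only a fifth; onsite sources close the gap
# while the interconnect queue catches up.
```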

>> A couple of topics to hit before we lose you. New players: how do you think about the sovereigns and what they're doing in their buildouts?

>> Yeah, I think...

>> They seem to be able to fund themselves, just like...

>> Exactly. Right. You saw the news from India last week, and obviously a lot of the news in the Middle East and Southeast Asia.

>> I think we're continuing to see that sovereigns view compute and AI, and even we do here in the United States, as a matter of national security. And obviously the funding of those clusters is very different from funding a private cluster; you've got government capital that can be used for that. So I think there are two things I find interesting in that space. One is: who are the partners that are going to build that capacity? And the other is: what are the cybersecurity implications and environments for that? Those are the two nuances with sovereigns. They need to find players that can rapidly scale compute in their countries, and oftentimes they don't necessarily have those players that know how to build and scale GPU compute and help build sovereign ecosystems around the world. And then there's the matter of cybersecurity: how do you make it a truly safe ecosystem for those sovereigns? I think there's still a lot of work to do on the cyber side, especially as you look at scaling sovereign AI.

>> What is your thinking on physical AI? It's another, if it works, capex-intensive area.

>> Absolutely. Maybe I'll just take a second to say: one of the things we observed from 2010 to the early 2020s was that we were in a very capital-light, asset-light mode of build. Like SaaS: you never heard Magnetar and SaaS together, right? Because it was just purely asset light. Compute and everything we saw starting in 2021 is asset heavy; that's where you started hearing a lot more about us. And I think physical AI is actually an extension of that. I think we all have scars from the 2010s of hardware companies that did not make a lot of money for us. Part of the scars was that it was so difficult to scale hardware companies, because the software was so difficult to build; you needed to spend so much money building the hardware that the software was an afterthought. What you're seeing now is that, with more general-purpose software via AI, it can make the hardware easier to scale, because you have software that can interact with more hardware. So I think the natural extension of what we see is what happened in the compute markets, where you really needed flexible capital; it wasn't just equity, it was debt and a variety of project finance to really scale capex. You're going to see that same need in physical AI, and it simply has to do with capital intensity. On the compute side, for CoreWeave as an example, they needed billions of capital to scale that cloud, and whether it's a robotics company or a manufacturing-focused company, drones, defense, all of these areas are incredibly capital intensive. Now that you add AI into them, I think it can help them scale faster, quite frankly, and the capital intensity is still there. So there's a moment in time now where you're going to have to really look at optimizing balance sheets for physical AI to really grow and scale.

>> I think to your point of how the early AI compute contracts were structured: I went from learning to be an investor in an era and an environment where robotics was a great way to lose a lot of money for a long period of time. You remember that.

>> Now I sit on the board of two robotics companies. So let's hope it's not true anymore. But I'd say it's just a question of capability to me, whether it's in the home or in industrial settings where it is simply not a good human job, or we don't have the labor.

>> Yeah.

>> You are going to have, I think, products that will support investment grade buyers, who are going to have contracts that say, we want this, and you can raise debt against it.

>> Exactly. Right.

>> And so I think that actually feels like a very similar shape. Last question for you, because it is so timely: what do you make of the general capital rotation out of software, the end of software, and it's all infrastructure, labs, and AI natives, I guess?

>> Yeah. It's interesting to see that every day there's another industry that kind of tanks: you saw the wealth advisors tank for a few days, you saw the consulting companies, you saw real estate, payments. I think what you're seeing, at least in my view, is that towards the tail end of 2025 and into 2026 there was a big step up in the performance of usable AI, with what Anthropic was doing with Claude, and obviously we use all the models, but there was a definite step up in performance in making AI usable and in seeing that it can truly disrupt these non-AI-native industries. I think the reaction and the rotation out of each of these names is a bit much, and there are two factors I look at. One is valuations: from a free cash flow perspective, SaaS companies are valued at the lowest they've been in years, and there's a huge margin difference between what those revenue multiples are today and what they've been in the past. Free cash flow margins have steadily and significantly increased for SaaS as a whole over the last four or five years, while revenue multiples have stayed the same or gone down.
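To see why that combination matters, a quick valuation sketch; the multiples and margins are invented for illustration:

```python
# If revenue multiples stay flat while free-cash-flow margins rise, the price
# paid per dollar of FCF falls. All numbers are invented for illustration.

rev_multiple_then, fcf_margin_then = 8.0, 0.10  # assumed: several years ago
rev_multiple_now, fcf_margin_now = 8.0, 0.25    # assumed: today

ev_per_fcf_then = rev_multiple_then / fcf_margin_then  # 80x FCF
ev_per_fcf_now = rev_multiple_now / fcf_margin_now     # 32x FCF

print(f"EV/FCF then: {ev_per_fcf_then:.0f}x   EV/FCF now: {ev_per_fcf_now:.0f}x")
# Same revenue multiple at 2.5x the margin means the market is paying far less
# per dollar of free cash flow, i.e. SaaS at its cheapest in years on FCF.
```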

>> And so to me that's a bit of an exaggeration, because it really has to do with individual names versus sectors. That's at least my take: in all of these sectors there are individual names that will learn how to maximize their value using AI, and there are those that won't. But what's happening right now is that a hammer is being hit across all names, not just the specific individual names that might not be using it as well. And then the second point, at least in my view, is that there are a number of applications that on paper sound really interesting, like, oh, AI could just rebuild Slack, or it could rebuild Salesforce, or X, Y, and Z. It's not just the product; it's the way it's integrated across multiple services and systems across the enterprise, and that is a lot more difficult to just replicate than I think some of the public markets are reacting to.

>> And I do think there's a fundamental question, in addition to what you said, which I agree with, of: does anybody want to rebuild it and own it? And, to your point, within the software sector in particular there are companies that are structurally more protected and there are companies that are at more risk. I think it's as simple as: you've got to go select.

>> Yeah, exactly.

>> Um this has been so fun. Thanks so much, Neil.

>> Yeah, I really appreciate it.

>> Congratulations on all the innovation and on building out all the compute.

>> Awesome. Thank you. Good to be here.

>> Find us on Twitter at @NoPriorsPod.

Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
