Sequoia Partner, David Cahn: Who Wins in AI, Defence & Is T2D2 Dead?
By 20VC with Harry Stebbings
Summary
Key takeaways
- **Physicality is AI's moat, not just bits**: The physicality of data centers, including steel, power, and electricians, is a critical moat in AI development. Thinking in terms of 'atoms' rather than just 'bits' is essential for understanding AI's true impact. [01:36], [01:59]
- **AI bubble is consensus; focus on survivors**: While the AI bubble is now a consensus view, the crucial question is who will survive it. History, like the dot-com bust, shows that even after a bubble, strong companies like Amazon can emerge and thrive. [10:47], [11:18]
- **Consumers of compute win in AI bubble**: Consumers of compute benefit from AI bubbles because overproduction leads to lower prices and increased gross margins. Producers of compute, however, are in a commodity business where prices can fall, making it harder to control their destiny. [15:02], [15:41]
- **Monopolies are rare; AI's is visible**: Unlike the Big Tech era where monopolies hid in plain sight, AI's potential is widely known, leading to intense competition. This makes sustainable monopoly profits unlikely in the AI era, which is beneficial for consumers. [17:39], [17:47]
- **Young AI talent is undervalued**: Companies underestimate 23- and 24-year-olds in AI, despite the fact that the field is new and the playing field is level. Dynamism and the ability to learn are more valuable than experience in this rapidly evolving landscape. [49:56], [50:40]
- **Defense is the next AI wave, but a narrow category**: Defense is poised to be the next major AI wave, analogous to the period after the transformer paper. However, it will likely be a concentrated category with only a few national champions, rather than a broad ecosystem like SaaS. [58:58], [01:06:49]
Topics Covered
- AI's Physicality: From Bits to Atoms and GDP Impact
- The $600 Billion Question: End-User Demand for AI Compute
- Construction as a Moat in AI Data Centers
- Visible vs. Hidden Risk in Hiring: Preferring Transparency
- Defense is the Next AI: A 50-Year Catch-Up
Full Transcript
I do think we're in an AI bubble. You
can see the fragility. Everybody can see
the fragility. The thing that I think is
more interesting is who's going to
survive the bubble. Consumers of compute
benefit from a bubble because if we
overproduce compute, prices go down,
your COGS goes down, and your gross
margin goes up. The lesson that punches
you in the stomach in venture is you
can't make a company succeed. How would
you respond to Sequoia were asleep at
the wheel when it came to defense not
being in Helsing and Anduril, the two
clear market leaders in the category? I
would say
ready to go.
[Music]
David, I love your writing. Our episode
last year was one of the most downloaded
shows. I had like the CMO of Meta tell
me that it is the single show that he
has forwarded to more people and cites
more often than any other. Not to make
you, you know, nervous or set the
pressure for this episode, but thank you
so much for joining me again, dude.
Thanks for having me, Harry. You were
always very kind.
>> Now, the year of the data center sounds
wonderful. We had an amazing discussion
last year. What did you predict last
year David that happened and we are
seeing in action now? I think there's
really so we talked about last year this
concept of steel servers and power and I
think if you remember you know rewind to
summer 2024 the big conversation at that
time was compute models and data that's
what everybody was talking about and I
sort of had this view that everyone was
underestimating the physicality of these
data centers I'm on the front lines I'm
talking to people every day and you you
know you talk to people they're flying
electricians to Texas and they're trying
to buy out generator capacity and you
know generators are sold out until 2030. And so how do you get in line and
how do you do that? And so I sort of had
this sense that people were thinking
very abstractly sort of in a in a bits
perspective about AI but they should be
thinking in an atom's perspective about
AI. And I think that prediction came
true in two ways. Uh the first way is
the best trade of 2025 was the AI power
trade. A lot of Wall Street people made
a lot of money betting on the fact that
power was going to be the constraint and
we're going to move away. You know you
hear Sam Altman now talking about
gigawatts every day. He's not talking
about dollars anymore, right? So we're
moving away from dollars and we're
moving toward gigawatts. And I think
that transition has fully happened in
the last year. The second way I think it
was right and I saw, you know, it's
funny now like a year and a half later
you see this on the cover of the
Economist, on the cover of the Wall
Street Journal, on the cover of the
Atlantic. The mainstream media has now
really picked up on this narrative of
the physicality of AI is what translates
to GDP. I mean GDP is an imperfect
metric and it generally captures
physical things more than virtual
things. And so GDP now is picking up all
of this construction boom that's
happening, all this steel that's getting
created, all of the physical stuff
that's happening in the AI data centers.
And you're seeing these stories which I
think are true, which is AI is now one
of the biggest contributors to GDP
growth in the United States. And so I
think that's the second way in which
that prediction has played out.
>> Does its contribution, and you know me, I just go rogue and go off-grid, but it's much more fun, does its contribution to GDP growth go contra to your $600 billion question in terms of where the revenue will come from?
>> Well, the $600 billion question, and maybe just to remind folks what that is: it's basically a very simple equation that says, and this was 2024 when I wrote this, if you invest $150 billion in Nvidia chips, that's about $300 billion of data center investment, and to pay that back the people using the compute need to earn a 50% gross margin. So there's about $600 billion of revenue that needs to get generated. If you redo that analysis in the summer of 2025, it's about $840 billion. So it's grown, but it hasn't grown dramatically.
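To make that arithmetic concrete, here is a rough back-of-envelope sketch of the framework as described above. The function and variable names are illustrative, not from the episode or the original piece, and the 2025 chip-spend input is inferred from the $840 billion figure.

```python
# Back-of-envelope sketch of the "$600B question" math quoted above.
# Figures are the round numbers from the conversation; the names here
# are illustrative, not from David Cahn's piece.

def required_end_user_revenue(chip_spend_billions: float) -> float:
    """Revenue the compute's end users would need to generate to pay back the build-out."""
    data_center_capex = chip_spend_billions * 2  # chips are roughly half of total data center cost
    return data_center_capex * 2                 # doubled again for a 50% gross margin

print(required_end_user_revenue(150))  # 600.0 -> the 2024 "$600B question" (in $B)
print(required_end_user_revenue(210))  # 840.0 -> the summer-2025 rerun; 210 is inferred from the $840B figure
```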
And so the question behind the question was: is
the customer's customer healthy? We know
that the customer is healthy. We know
that people are buying all these data
centers. We know that people are
building these data centers. We know
that those stocks have all gone up. We
can see that. But is the customer's
customer healthy? Is there actually an
end user for this compute? I don't think
that's been answered. And I think that
the question last year, which was
the valid question, was if everyone's
spending all this money, it hasn't
shown up yet because people haven't put
the shovel in the ground yet. I
literally wrote a piece last summer
called AI is shovel ready. You know, the
shovel is going to start hitting the
ground. And so now the shovel is hitting
the ground. We're mid construction on a
lot of these projects. One of the
predictions I made last year, in
addition to saying it was going to be
the year of the data center in 2025, I I
said, "Hey, we're going to have these
construction delays. We're going to have
issues now in building out these data
centers." and the information has done a
very good job of reporting on this, but
I think we're at the beginning now of
seeing some of that play out as well.
>> Are we going to see a mass proliferation
of delays on data center construction?
Do you think?
>> I think we're going to see variability.
One thing I'm always interested in as an
investor is like there's winners and
there's losers and there's variability
and I'm very skeptical whenever anyone tells me like everybody
is going to win or everybody is going to
lose or everyone is going to do anything
like there's always variability. Imagine
a race. You have a track race. Like
there's somebody in the front and
there's somebody behind and someone's
faster than the other person. And so I
think with data center construction, one
of my core perspectives that I've been
developing over the last 18 months of
writing about this is that construction
itself is going to be a moat. The
ability to build things is hard. And I
think we underestimate that. And I think
we continue to underestimate that
because we sort of say, oh well it's
fine. Like everyone's going to do it.
The timeline is two years. Okay. But
like there's a lot of complexity that
goes into that. And by the way, the
complexity compounds when everybody is
doing the exact same thing at the exact
same time and everyone is trying to buy
from the same vendors. And I've written
a lot about the AI supply chain for that
reason because you really need to care
about not only okay, Meta and Google are
both building a data center. But who's
the guy that they're calling and who's
the guy that he's calling? And you got
to follow it all the way down the supply
chain to get to the core of really
what's going on.
>> There's so many things I want to unpack
within those. I do want to go to what
did you not predict or foresee that did
play out that you were surprised by?
>> I think there were two big misses uh
last year. I think the first big miss
was these like big talent
acquisitions. I mean I think that if you
had asked me the probability a year ago
that you know if you're a 25-year-old
recent grad from an elite university who
is perceived to be an AI expert, you can
get a $50 million pay package right now.
And if you are a brand name that
everyone recognizes your name, you can
get a billion dollar pay package right
now for a single individual. I totally
did not see that coming. And I think
that you asked me a year ago to predict
that, I would have said you were crazy.
So sometimes I I do think the beauty of
AI is like reality is stranger than
fiction and a lot of crazy things
happen. The second...
>> Before we move to the second, do you think those scaled pay packages are justified?
>> I think they're symbolic of this sort of
desperation in the ecosystem where it's
like we need to eke out progress. We need
to prove that all these investments are
worth it. And I think there's this logic
that gets really abused in the venture
world and in the tech world which is
like hey if I increase the probability
of making a trillion dollars by 1%
that's worth a ton of money, right? That's worth $10 billion. And sure, that's true, but it's very easy to overestimate the 1%. Is it 1%? Is it a hundredth of 1%? Is it a thousandth of 1%? Our brains are very bad at reasoning at
that scale of number and so I think to
the extent that you believe that hiring
this very impressive researcher
increases the probability you win by 1%.
I totally can see why you will justify a
billion dollar pay package for an
individual. That said, I think we are
psychologically biased to overestimate
what that percent contribution is. And
it may be the case that there's these
broader macro variables, which we'll
talk about, I'm sure, later in this
discussion. There's these broader macro
variables that are actually driving
progress in AI that no single individual can change.
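To illustrate the sensitivity he is describing, here is a minimal sketch of that expected-value logic. The trillion-dollar prize and the three probability levels are the figures mentioned above; everything else is illustrative.

```python
# Minimal sketch of the "1% of a trillion dollars" reasoning above: the
# "justified" pay package scales linearly with the probability uplift,
# which is exactly the number our brains are bad at estimating.

PRIZE = 1_000_000_000_000  # a trillion-dollar outcome, as in the example above

for uplift in (0.01, 0.0001, 0.00001):  # 1%, a hundredth of 1%, a thousandth of 1%
    package = uplift * PRIZE
    print(f"probability uplift {uplift:.3%} -> 'justified' package ~${package:,.0f}")
# 1.000% -> ~$10,000,000,000
# 0.010% -> ~$100,000,000
# 0.001% -> ~$10,000,000
```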
>> I'm very upset looking at these pay
packages that my mother didn't push me
towards a more engineering-heavy design. Does everyone feel that way?
>> I think
that's like probably the universal
reaction to seeing these packages, man.
>> I I'm like, "Mom, you should have done
better." Bad parenting. Uh you
encouraged me to do English. Really? Um
come on. Um yeah, War and Peace doesn't
quite make it, does it? When you're
getting paid three and a half billion by
Zuck. What was the second?
>> I think the second one, you know, one
thing we talked about on the podcast
last year, I predicted that Meta was
going to do really well, and I think
that prediction was clearly false in a
12-month time horizon. Um, I thought
that the vertical integration that Meta
had was going to be an advantage and I
think that Meta, you know, these 100
million packages are coming in large
part from Meta because they haven't
performed as well as they thought they
were going to. The reason I thought Meta
would do well is that it was vertically
integrated and founder-run. And I
sort of continue to believe that in the
fullness of time it is possible and I
think the dramatic actions that Zuck is
taking represent this. It is possible
that I will be proven right in a longer
time horizon, which is to say that
Zuck's going to fix the problem. It's
amazing what founders can do. He's so
focused on this. It's he's spending all
of his time on it. But I think if you
look back a year ago at the prediction
that Meta would do well, I think you
would say wrong.
>> Have you changed from a buy to a sell on Meta?
>> I think the dramatic action that Zuck's
taking represents just how deeply
invested in this he is. And I think it
also shows us what founder CEOs can do
and why founder CEOs are different than
non-founder CEOs. I mean there's all
these studies of like if you just invest
in the basket of founder CEOs you will
outperform the basket of non-founder
CEOs and I think what Zuck is doing
represents that and so I remain
optimistic about Meta long term.
>> You said about the vertical integration
there being part of like your thesis I
totally agree with you and was probably
shaped by hearing you to be quite honest
David you said to me data center and
model teams need to be coupled kind of
going to the vertical integration
element.
Do you stand by that? How do you think
about that when hearing that today? And
does OpenAI and Anthropic not having
that vertical integration challenge
that?
>> Well, I think the simple version would
be OpenAI and Anthropic are now steel
servers and power companies. And that's
like a big change that's happened in the
last 12 months. And so I actually think
where, you know, in many ways OpenAI and
Anthropic are becoming more and more
vertically integrated every day. You're
seeing a lot of announcements around
them developing their own chips. uh
they're work, you know, every day you
hear Sam Altman talking about gigawatts
of power and procuring his own power and
so I think you will continue to see the
big labs moving vertically down the
supply chain and that's been one of the
biggest trends of the last 12 months.
>> Do you think we will continue to see
that? We saw Poolside recently announce a 2-gigawatt data center that they're building out in conjunction with CoreWeave. Do we think all model
providers will need to be vertically
integrated in this way? I think that
competitive pressures will push all of
the model providers to spend more time
on this and to have teams focused on
this. So I think the answer is yes. I do
think that this is a trend that is going
to be durable.
>> When we think about where we are today,
everyone says bubble. You've heard it.
I've heard it. It's on my TikTok. Do
you think we're in an AI bubble?
>> I do think we're in an AI bubble. I also
think to your point a year ago when we
had our last conversation, it was a very
contrarian thing to believe that we're
in an AI bubble. Today it's a very
consensus thing to believe we're in an
AI bubble. I mean Sam Altman, Vinod Khosla, Jeff Bezos, like some of the
biggest AI bulls have now come out and
basically said, "Hey, we're in a bubble
of some sort or another." And each has their own
perspective on exactly how that's going
to manifest. So I think right now the
bubble conversation has sort of reached
kind of full consensus. The thing that I
think is more interesting is who's going
to survive the bubble? What's going to
come next? And so I think there's two
components to that. Number one, who are
the winners and who are the losers? If
you remember from the dot com, a lot of
companies from the 90s still did well.
Amazon still became an amazing company
after the dot-com bubble. So I think there's
an opportunity for winners to continue
to do well after the bubble. And I think
the second thing that's really
interesting is just timelines, right?
Like a lot of us, you know, I've always
said like my core belief is that in 50
years when you and I are 80 years old,
AI is going to have completely changed
the world. It's going to dramatically
reshape everything about society. And so
if you take that time horizon and you
say okay AI is this tremendous
tremendous technology innovation. It's
the most important thing that's going to
happen in our lifetimes probably it's
going to be among the most important
thing that's ever happened in human
history and in the history of this
planet. Right? So it is this amazing
thing and yet the market is implying
some probability that all of this is
going to happen in such a short time
horizon with a very specific chipset and
all of this stuff. And so I think
unpacking the tension between AI as a
long-term winning trend and a long-term
generational change and a short-term
market cycle that will incinerate
capital. I think that's the second kind
of area that I think is is really
interesting.
>> How do you balance that being an
investor today, David? Like play the
game on the field, the Bill Gurley quote,
but then also the awareness of the
long-term impact that will come over multiple decades.
>> I think it's tricky. Um, and I think the
one benefit I have is I've been
investing in AI for about eight years.
And I so I've been able to, you know,
for me this is not like, hey, this is
like a 12-month thing where you're like
running and have this FOMO to get into
AI. I started investing in AI in Weights
and Biases series A when everyone said
deep learning was going to be tiny. It
was a year after the transformer paper
came out and everyone said deep learning
is a tiny market. Why would you invest
in this company? And of course, they had
a really nice exit to CoreWeave recently. I invested in Runway ML when
stable diffusion hadn't even been born
yet. And everyone was saying, "Oh,
Transformers is the only way." And of
course, stable diffusion introduced a
new model architecture. And I invested
in Hugging Face, which I still remember
the first meeting I ever had with Clem.
It was, you know, he had launched this Transformers library. It's funny, now transformers are on the tip of everyone's tongue, but at that time it was NLP, by the way. It wasn't AI at that time. And he had this amazing
transformer library and for folks who
are steeped in AI, it was a successor to
BERT and this old school of NLP models.
So I just say that to say that um I
think when you take a long enough time
horizon in AI over the last eight years
you have more opportunity to find
investment opportunities. It's not about
finding 10 investment opportunities. At
least for me I don't need to find 10
investment opportunities this year. I'd
like to find one or two investment
opportunities a year that I really love.
This year I've invested in Clay which I
think is an amazing application layer
company we can talk about. I invested in
Juicebox which is building an AI
recruiter that has tremendous love. And
so I think you can find exceptional AI
companies that I believe will do really
well over the long time horizon and will
continue to succeed for decades and
decades to come. And one thing I asked
myself before I make every investment
is, is this company going to succeed in
spite of market volatility? If your only
way that your company's going to succeed
is that it can raise infinite capital in
a cheap capital market, that's very
difficult. If you have real customer
love and you've built something that
people absolutely need, you're going to
be able to navigate through any market
environment. And by the way, we've kind
of seen that now with all of these 2021
companies navigating that environment.
Some of them came out really strong on
the other side. Look at Databricks: a $60 billion, now $100 billion, valuation. So
you can come out the other side of
market cycles if you have compelling
product market fit, a great team, a
great founder.
>> So David, when we play out your question
there of the winners and the losers,
just so I understand that, who do you
think the winners and the losers will be
when we look back on this last 12 to 18
months?
>> I've had a very simple framework for
this. It's actually I think probably the
first thing I ever published in AI, AI's $200 billion question, way back in 2023. And um the framework is this.
Consumers of compute benefit from a
bubble because if we overproduce
compute, prices go down, your COGS goes
down and your gross margin goes up. So
I've had the view that you want to
invest in consumers of compute.
Producers of compute. Imagine you're
producing any commodity asset. If other
people produce a lot of that commodity
asset, it doesn't matter. It has nothing
to do with you. You might be running the
best operation possible. You might be an
amazing business person, but if
everybody else starts producing the same
commodity asset, prices go down. And so
it's very hard to control your destiny
in commodity businesses. By the way,
this is why commodity businesses tend to
trade cyclically and tend to trade at
lower multiples than non-commodity
businesses. So I think if you're a
producer of compute, you're
fundamentally in a commodity business
just like an oil company is in a
commodity business and that is going to
trade a different way and that is going to have more cyclicality than if
you're in a non-commodity business
consuming the commodity consuming the
energy and producing intelligence on top
of that. And so I think if you're
consuming this raw resource which is
power and you're producing intelligence
and doing something that people love
with that intelligence, those are the
businesses that are going to do well on
the other side of this market cycle.
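A minimal sketch of the mechanism described above, assuming a compute-heavy cost structure; the revenue and COGS figures are made up purely for illustration.

```python
# Sketch of the consumer-of-compute mechanism above: if compute is overproduced
# and its price falls, a consumer of compute keeps its revenue but its COGS
# shrinks, so gross margin expands. Figures are illustrative only.

def gross_margin(revenue: float, cogs: float) -> float:
    return (revenue - cogs) / revenue

revenue = 100.0
for compute_price_index in (1.0, 0.7, 0.4):   # compute getting cheaper over time
    cogs = 60.0 * compute_price_index         # assume COGS is dominated by compute
    print(f"compute price x{compute_price_index}: gross margin {gross_margin(revenue, cogs):.0%}")
# x1.0 -> 40%, x0.7 -> 58%, x0.4 -> 76%
```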
>> Are three of the best businesses not
commodity businesses in the form of
Google Cloud, AWS, and Azure?
>> I love this question. So let's talk
about it. I think it's really
interesting. One thing I've written a
lot about and you and I have talked
about this is like game theory and these
big companies. And one of my core
beliefs or one of the things that I
think is underestimated in the market is
that we're living in an anomalous
monopoly era. And it's funny because
there's so many comparisons to
industrial revolution and in some ways
we're living in this new gilded age.
And we have these seven companies and
they represent 40% of the S&P 500 which
is just mind-blowing. And they have
these amazing monopolistic businesses
and um and these businesses are cash
cows. And um and I think people
extrapolate from that and they say, "Oh,
all businesses are monopolistic." I
think people have a mental model that
implies too much monopoly and not enough
commodity. And what I think people
underestimate about the big tech
companies is that when the big tech
companies were founded, when Google was
founded, nobody thought it was going to
be a monopoly. Think about YouTube
selling for a billion dollars. I mean,
that would be crazy if you had known how
big all of this was going to be. So,
nobody knew that Google was going to be
monopolistic. And you can build
monopolies when they're hiding in plain
sight. Nobody can see them. And so you
build this monopoly and you don't have
that much competition. AWS is the same.
You mentioned AWS. Nobody knew that the
cloud was going to be this tremendous
opportunity when AWS started doing this.
And to their credit, that's why they
have the biggest market share in the
cloud business. And that's been very
durable for them. And so I think when
nobody sees the monopoly, you can build
a monopoly and then you can extract
margins on the other side. But AI is so
different. Everybody knows that AI is
going to be big. Like this is I think
the irony of AI is that everybody
knows AI is going to be massive. But if
everybody knows something's going to be
massive, then everybody builds
companies. And if everyone builds
companies, there's tremendous
competition. And so I think the
difference between the AI era and the
big tech era. And it makes sense why
everyone is overindexing or overtraining
on the big tech era because that's the
era we live in. But the difference is
that these monopolies are not hiding in
plain sight. We all now know that if you
build an amazing tech company, it can be
worth a trillion dollars. In 2000, if
you told people that they could have a
trillion dollar tech company, they would
have laughed you out of the room. And so I
think the market environment in which
these companies are getting built is
dramatically different. And monopoly
profits are unlikely to exist. And by
the way, final point on this, that's
good for us. That's like good for
everybody. Like we shouldn't want
monopolies to exist. Monopolies are bad for the consumer. The consumer
wants to get things for free and the
consumer wants to get things for the
cost of capital. And I think that to the
extent that there are not monopolies in
AI, that's much better for how AI is
going to evolve in a healthy way than if
it evolved in in sort of a monopolistic
direction. You said about kind of
consumers of compute will win. I like
that. But respectfully, it feels
relatively accepted in venture
ecosystems for sure in a way that your
bets before weren't. Weights and biases
weren't. Runway wasn't. Hugging face was
kind of a kind of weird community play
at a point. What do you think is obvious
to you that is not obvious to the rest
of the community today? When I first
started saying this 18 months ago, it
was definitely not consensus. And so one
thing that is tricky in the business of
ideas is that as soon as the idea
becomes accepted, it was always obvious,
but in the moment where you propose a
contrarian idea, you know, everyone
everyone kind of criticized it. So I do
think it's been interesting to see um
see the change. And then by the way,
that people who had the wrong opinion
very quickly changed their opinion such that they weren't actually
wrong. Um and so anyways, I think the
the idea game is a is a tricky one. Um,
I think that, and the second thing I
would say to that is while people say
they believe this, and you and I talked
about this on the podcast last year, you
probably remember this. Everyone says
they believe this and then you look at
these pitchbook charts where it's like
where's the dollars going? And I think
probably 80% plus of the dollars in AI
are still going to producers of compute,
not consumers of compute. So I do think
you're right that it's an accepted
narrative, but the producers of compute
consume so much more capital than
consumers of compute. That if you are in
a capital deployment strategy and you're
trying to deploy as much capital as
possible, you have to invest in the
producers of compute. And I think that's
one of the dangerous things in investing
which is that you have this there's this
almost like incentive to invest in
people who consume more capital because
they're calling you every day. And the
people who don't consume capital don't
want to raise capital. And I think some
of the best investments are those
companies that don't want to raise
capital. When Sequoia invested in Zoom,
they didn't want to raise capital,
right? They were profitable. They were
doing really well. Those are the
businesses that I think as an investor
you really have to focus your time on.
>> I spoke to Sonya on your team beforehand
and she gave me a fantastic question.
She said, "If this is a game theoretic
bubble, is there a coordinating
mechanism for the spending to stop and
the bubble to pop?"
>> You know, I love game theory. So, I
mean, my my basic framework on AI, and
this is actually kind of how I write all
these pieces, is there's like 10 players
around this big chess board and they're
extremely powerful and each of their
moves affects the other people's moves.
So, it's kind of recursive and and so
you sort of have to think first order,
second order, third order. How do how
does my move affect other people's
moves? And these are very sophisticated
players doing this. And so, the simple answer to your question is it's not coordinated. That's the
beauty of the invisible hand. That's the
beauty of people's incentives. These are
big companies that are acting out these
incentives and so I think until the
incentives change the behavior is not
going to change and so there is no
coordinating mechanism. I do think that's always the surprising fact of capitalism, like everyone wants to believe that
everything is kind of coordinated. It's
easier for our brains to grok everything
being coordinated but I actually think
it's it's pretty uncoordinated and
incentive driven.
>> If we think about you said earlier it is
definitely a bubble and we're seeing
this consensus across the different
visionaries in our ecosystem. If it's a
bubble, does it pop or does it deflate
and how do you expect that to play out?
>> So, I'm a student of Nassim Taleb and I will lean on Nassim Taleb. He's a hedge fund investor and philosopher and he's written Fooled by Randomness, Antifragile, and The Black Swan. I think these are books that a lot of folks will be familiar with, really influential books in the investing world. And his philosophy, and he says this in Antifragile, is, you know, it's really
hard to know if a building is going to
fall down, but you can see when it's
wobbly. And so you can't really predict
when the wobbly building falls, but you
can notice the fragility. And so I think
my perspective on AI right now is uh you
can see the fragility. Everybody can see
the fragility.
>> Can I ask you, what specifically makes you say you can see the fragility?
>> Well, the circular deals. I think the circular deals dynamic is probably, when I think about why this AI bubble narrative went from contrarian a year ago to consensus today, the main thing driving the consensus is these circular deals and the big tech company dynamics. Let me unpack that.
>> A year ago, hyperscalers were holding up
the AI ecosystem and everybody felt very
comfortable with that because everyone
knew that these were very robust
businesses. Microsoft and Amazon
specifically were driving the vast
majority of the AI capex growth and they
were explicitly saying, "Hey, we're
going to buy out your generator capacity
for five years. We're going to sign a
20-year lease on this data center and
we'll back it up with our credit." So,
they were basically putting themselves
in front of all the risk. And the way I
thought about it a year ago and wrote
about it a year ago is like they're
almost grabbing the demand hot potato and saying like, "It's ours.
Don't worry about it. We got this
covered." A year later, Microsoft and
Amazon have really stepped back. And
this started and again the information
has done a really nice job reporting on
this. This started in uh the beginning
of the year there was this big uh public
announcement or or leak or whatever you
want to call it where uh Microsoft
walked away from two data centers. And
it sent a message to the market like hey
we're not stepping up. We're not going
to take all the risk on everybody else's
behalf. We're not going to be this this
this risk absorber in the ecosystem
anymore. And then what happened later
this year is Oracle obviously stepped up
and took on a huge amount of the compute
demand and CoreWeave has really stepped up and taken on a huge amount of the compute demand. And so you have this shift from Microsoft and Amazon to Oracle and CoreWeave. And then the second order effect of that is that Oracle and CoreWeave are a lot smaller than Microsoft
and Amazon. They simply can't absorb as
much risk as Microsoft and Amazon could.
And so the chip companies are now
stepping up and saying, "Okay, we'll
absorb some of the risk. We'll put in the capital to finance this buildout where the demand on the other side is not so clear," because of course the chip companies also get to book this as revenue. So their cost of capital
is very low. One might even say their
cost of capital is negative in some of
these deals. And so it's the it's the
cheapest capital available. And so
moving from, you know, expensive capital
from these big tech companies to cheaper
capital from the chip companies
themselves who get to benefit from
circularity. Um, I think that's probably
been the biggest change in the last 12
months in AI. And I think that's
something a lot of people have observed.
It's it's fairly, you know, obvious. And
so that I think has changed a lot of
people's minds.
>> Do you think these deals are priming the
pump, so to speak?
>> I think all of these deals now are
priming the pump. I mean, you basically
announce the deal, they're 10 or 20%
funded, and then you have to go raise
capital to to to fund the rest of it.
And so, you know, everyone announces
these deals in gigawatts, not dollars
anymore. And I think most people don't
know how many dollars a gigawatt is. And
so the rough math is, you know, a gigawatt is $40 billion to build out. Jensen says it's 50 or 60 if you use the next generation Vera Rubin chip. So let's say it's somewhere between 40 and 60 billion. So a 100-gigawatt power build-out, which is what people are talking about now, that would be AI's $8 trillion question. And then 250 gigawatts of power is AI's $20 trillion question. So, we've totally upped the ante and the magnitude is just much, much bigger, but of course that's not funded. Um, and so I think the funding for these deals is going to be an important thing that has to play out.
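Translating those gigawatt figures into dollars, a rough sketch. The $40 billion per gigawatt is the figure quoted above; the extra doubling for a 50% gross margin is an inference from the $600 billion question framework discussed earlier, and the names are illustrative.

```python
# Rough translation of the gigawatt figures above into dollar terms, reusing
# the same payback logic as the $600B question (2x build-out cost at a 50%
# gross margin). Everything here is an illustrative sketch.

def revenue_question_for_gigawatts(gw: float, cost_per_gw_billions: float = 40) -> float:
    capex = gw * cost_per_gw_billions  # build-out cost in $B
    return capex * 2                   # revenue needed at a 50% gross margin, in $B

print(revenue_question_for_gigawatts(100))  # 8000.0  -> the "$8 trillion question" (in $B)
print(revenue_question_for_gigawatts(250))  # 20000.0 -> the "$20 trillion question" (in $B)
```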
>> How do you read them? When I hear you
speak now, I feel very concerned. Like, is there even the capital supply in the world for these? You know, we've heard about Sam Altman and the trillion
dollars that he needs and requiring the
same energy as Japan. And you're
actually looking at that going, "Well,
not even the sovereigns have enough
money for that actually."
>> Well, we're living through this amazing
moment, and I do think it's precarious.
We're living through this amazing moment
where like the entire capital market is
just AI, right? I mean, 40% of the S&P
500 is these big tech companies. They're
all basically trading on AI. Uh, private
capital is all targeted AI. And so, I do
think the world's capital machine is
directed in a single direction. I think
the risk is that it's all focused on a
very constrained period of time. I
actually think in the fullness of time,
it's not that risky. like these things
are going to play out. AI is going to be amazing. We're going to get these huge
technological breakthroughs. Tremendous
revenue is going to get created. It's
going to be a big driver of the economy.
The problem is and and the simple way to
think about it is it's all B100s and
H100s. And what if it actually takes
three years and it's the Rubin chips that get us there, or it's the Feynman chips that get us there, which is the 2028 chip, right? So, I think again it
comes back to where we started, which is
the physicality of AI. You can't just
say like, "Oh, I'm upgrade my chip.
Great. It's not my fingers. I've
upgraded my chip. No, you have a a giant
warehouse sitting with these chips and
they might be legacy chips and maybe
it's going to take us 10 years to get
there instead of two years to get there.
And I think that is kind of the risk
that the financial ecosystem is taking
on. Whereas, as an AI investor and an AI
believer, I'm like, we actually just
need to spread that risk over a longer
period of time and a greater number of bets.
>> Oracle is one of the biggest players
that we've seen enter the market as you
mentioned there. when you look at their
like debt to equity ratio traditionally
considered very very high, do you not
think they're out over their skis?
>> I think that one narrative that I have
been uh thinking about a lot is this
narrative that I think a lot of the
media has also been painting of like hey
debt is going to unwind the AI bubble
which is to say a lot of these AI investments are debt-funded, and the problem with credit is that credit unwinds, and then when you have a credit unwind a lot of bad things happen. Actually, that's not the way it's going to play out, which is maybe surprising.
Um, I think that the reason people are
so anchored to this sort of debt debt
narrative is that 2008 was a debt credit
unwind and people understand how messy
credit unwinds are. I actually think
that what's interesting about this AI
buildout is that for the most part and
let's put Oracle aside which maybe has
some debt but for the most part the AI
buildout today has been equity funded
and cash funded. And so, you know, every bubble looks different and every unwind looks different, and I think we always sort of over-index on the lessons of the past.
What I think is going to be interesting
if to the extent that the bubble unwinds
at some point, it's going to be an
equity unwind. And what that looks like
is 40% of the S&P 500 is basically a bet
on AI. And so to the extent that the bet
unwinds, stock prices go down. And
what's different this time again versus
2008 is more Americans, you know, a
greater percentage of Americans net
worth is equities than I think ever
before in history. And so people are
going to feel this in the form of their
equity portfolio going down more likely
than some credit unwind where the banks
get affected and all of that stuff.
>> Are you as concerned as I am by the concentration of value in the Mag 7? And again, if I'm pushing you on company specifics, dude, I mean, I'm really not a journalist in any way. Like, I have zero desire to get a clickbait answer, but I look at the concentration of value in the Mag 7 as a class or cohort and I am worried.
>> Yeah. I was sitting down yesterday with Sandy Nairn, who's the author of this book Engines That Move Markets, which is one of the all-time great tech investing books, and we were talking about AI and we were talking about markets, and he sort of made this comparison to Japan in the '90s, where basically if your portfolio was not leveraged to Japan in the '90s, then you were like the best performing fund in the '90s. And, I think he said, and this really surprised me, he said that Japan was basically 43% of the equity market and the US was 41%. And so it was really
really a huge percentage of the market,
right? And that really unwound. And so I
think you have a similar dynamic here
where the Mag 7 are just a humongous
portion of the market. Now these
companies are great. They have cash
machines like they're going to do fine.
But I do think we should be concerned
that these companies represent such a
huge fraction of the market and that any
change in the AI narrative really
affects them.
>> I want to discuss, you mentioned it earlier in the conversation, and we mentioned that concentration of value in the Mag 7. A lot of that's predicated on the belief that it will impact GDP meaningfully, and we touched on it earlier. Masa said that he thinks that
we'll see 5% GDP impact. How do you
think about and respond to the magnitude
of which we will see AI impact GDP and
productivity levels?
>> So I think Masa makes an interesting
point here and I actually agree with him
fundamentally that AI is going to affect
5% of GDP. Probably where I disagree with Masa: so I think he used the n trillion dollars, I think that's the number he used, it's going to disrupt n trillion of GDP. And then his next assumption is there's going to be a 50% profit margin and then it's going to be $4 trillion of economic profit. And I think
so I agree with him. It's going to
affect 5% of GDP maybe more in the
fullness of time. Um, but I think this
comes back to the point we were
discussing earlier where people uh
overestimate the monopolistic nature of
businesses and that we're living in this
sort of unique guilded age monopolistic
era and that that is is not the steady
state of business. And I was reading, I found this McKinsey report recently, which said that if you look at total global GDP, 1% of global GDP is economic profit above the cost of capital, which I think is surprising, and I think again confirms this intuition that I think is important, which is, for the most part, GDP accrues to regular working people who get wages and salaries. And um it is very
hard to sustain an economic profit above
your cost of capital. And again, to moralize for a second, like, that's a good thing. Uh, I do think that's really good and I hope that the economic benefits of AI accrue to everybody and not just a few companies.
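To put those two numbers side by side, a small sketch. The $4 trillion figure and the 1%-of-global-GDP figure are from the conversation above; the global GDP estimate of roughly $110 trillion is an assumption added here for scale, not a figure from the episode.

```python
# Sketch of the two numbers being contrasted above: Masa's framing (a slice of
# GDP disrupted, with roughly half becoming economic profit) versus the
# McKinsey-style point that economic profit above the cost of capital is only
# about 1% of global GDP today.

GLOBAL_GDP_TRILLIONS = 110  # assumed here for scale only, not from the episode

masa_profit_claim = 4                                       # ~$4T of economic profit, as quoted above
profit_above_cost_of_capital = 0.01 * GLOBAL_GDP_TRILLIONS  # ~1% of global GDP

print(f"Masa-style AI economic profit claim: ~${masa_profit_claim}T")
print(f"Total economic profit above the cost of capital today: ~${profit_above_cost_of_capital:.1f}T")
```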
>> In terms of overestimations, I was just chatting with Rory O'Driscoll from Scale and Jason Lemkin, who we have our weekly show with, and they actually said that the biggest problem today is we're seeing this overestimation of demand. They were specifically talking about legal, where every law firm is looking for an AI provider today because they've been told to look for an AI provider. That will not be the case next year and the year after, and so it is an atypical market cycle where 100% of the market is looking for a new provider, where normally it would only have been 5%. Do
you think that's a fair description?
>> I think there are a number of things that are being overestimated. I think the most
important one is the timeline. And I
mean, you've probably seen there's a lot
of commentary now in the last few days
about this like AGI timeline getting
pushed out. And um and you know, this is
something I've been talking about for
the last like four months and because a
lot of the leading indicators were there
in June, July, but this did change over
the summer. So, it makes sense why
everyone's talking about this right now,
which is, in June or July, Andrej Karpathy at Y Combinator said, "Hey, we're in for the decade of agents as opposed to AGI in 2027." And a few weeks ago, Richard Sutton was on the Dwarkesh podcast and basically explained why. And Dwarkesh, I think, has been doing a good
job of fleshing out why the current
technology paradigm is not enough
potentially to get us to AGI. Um and so,
and then Sam Altman came out, I think
also in June or July, and said, "Hey,
it's going to be a more gentle
singularity. I've actually been
surprised by how, you know, uh gradual
the change has been as opposed to being
sort of this this crazy change." And so
for me, there's this contrast between
what I I think of as like the lunchroom
conversation at these big labs. Like you
have these 25-year-olds sitting around
lunch being like AGI is 100 days away.
No, it's 200 days away. No, it's 300
days away. And like the highest status
person is the person who says it's 100
days away because they're the most
aggressive. And and then but you
contrast that against like the true
thought leaders and and godfathers of
AI, the people who really invented this
category, people like Richard Sutton, people like Andrej Karpathy, people like Ilya Sutskever, who said in December that
pre-training is dead. And those people
think, hey, the timeline's actually like
20 years, 30 years, etc. And so I think
that contrast is probably the biggest
thing that's being underestimated. And I
think the irony of that is that it's
actually the people who are the
forwardthinking leaders who sort of led
us down this path. Like the path we're
on was invented by these people who are
raising the most concern or saying the
timeline is longest. And it's the people
who've been in AI the shortest who I
think are saying like hey it's going to
come tomorrow. And I think there's sort
of this experience curve of these things
are just hard and they take time. And by
the way, just to say this because it's
so important, it's like if this happens,
it's a cataclysmic event in the history
of our species. So it doesn't really
matter if it happens in 200 days or 50
years. What matters is that it does
happen.
>> I almost feel like apologizing because you're so smart and intellectual, and then I'm like, "Yeah, well, vent your baby." Um, but like, kingmaking is a real thing. Making one person the anointed winner, with a large amount of capital distribution and brand, a la Harvey, is a very real dynamic that we're seeing play out. How do you balance the importance of kingmaking today with the long cycles, the decade plus, that we're talking about there?
>> I don't believe in kingmaking, and that's maybe a controversial thing to say. You'd think, like, oh, Sequoia should be able to kingmake companies, and that's so great, and by the way, if that were true it'd be really economically valuable for our LPs. And I don't think that's the case. Uh, and I think
if anything some of the hardest learned
lessons in this business are like you
think that your capital is going to
change the business. It's not. It's not.
Fundamentally the founder has to be
amazing. The idea has to be amazing.
Product market fit has to be amazing.
Maybe we can help them navigate a few
difficult decisions along the way. And
we like to think of ourselves as company
builders. But I think the lesson that
punches you in the stomach in venture is
you can't make a company succeed. The
company has to already be successful.
And then I think the second order effect
of that is like you should be humble
because the company succeeded not
because of you. The company succeeded
because of the founder and maybe you
helped a little bit but um you can't
make companies succeed as a as a venture
capitalist. And I think that um ego gets
in the way where people think they can
and I just don't think they can.
>> So you don't think, in a market like Profound's, that Sequoia and the subsequent quick round have helped them significantly get great talent, get great customers and get subsequent funding, which has then widened the moat between them and the plethora of other people? I'm sorry, I love you, but I respectfully disagree.
>> I think that there are flywheel dynamics for sure in venture. I do think having us on your cap table can make your company more successful. So, I'm not saying that having a brand-name, great VC who's going to work really hard on your cap table doesn't change the probabilities. I just think it changes the probabilities less meaningfully than people think on average. And I think that, you know, you
use Profound as an example because I was
in the pitch when they came to the IC.
The business was ripping. It was an
amazing business. They had tons of
customers lining up at their door to buy
the product. And so, yeah, we're lucky
to be in business with them and we're
grateful to be in business with them.
And I hope that we can shape their
journey in some way. And if there are five engineers that join because having Sequoia involved helped them join, phenomenal. And by the way, I think that's the number one way that companies do benefit from having us on the cap table: talent and recruiting. And we can talk more about
that and I'm I'm fascinated by
recruiting and recruiting dynamics. So I
do think Sequoia helps with that. It
especially helps with folks who are more
mimetic, where I think the brand name really helps. That said, I
just resist the idea that like, oh, you
know, I think this is just something
that you learn the hard way in this
business. Like, oh, I'm gonna put 20
million in this business. Now, it's the
Sequoia company in this space and
suddenly it's going to succeed. Like,
no, I don't. It doesn't work that way.
We've learned that the hard way. And I
think we in our investment committee
conversations, we really resist that
because I think that is how you make
mistakes in venture.
>> So funny. I remember when I interviewed
Doug and he was like, people think that
like cuz we're Sequoia, everyone just
comes and says, "Uh, here you are.
Here's my deal. You must have it. Take
it." And he's like, "I wish. I would
love that. It's not how it works. Like I
have to fight and fight and fight." And
I'm like, "Yeah, your biceps are
bulging, Doug. I totally believe that
you have to fight for every
deal. It's all good." Um, you mentioned
a couple of companies that you work
with. The common critique posed to
consumers of compute is margins, margin
structure, unhealthy margins. Do margins
matter today in this entry point of AI
or not?
>> I think they matter and the companies
I've invested in typically have
reasonably high margins. Um that said, I
think they matter. They're a
directional indicator of how much
product you've built on top of the
foundation models. They are not
absolutely important. I, you know, I
remember investing in a company many
years ago that had a 30% gross margin
and now it has a 70% gross margin. And
so gross margins go up over time. I
think one thing as an investor that I
guess you viscerally experience is that
plenty of companies that get critiqued
for having low gross margins end up
being super healthy businesses in the
long run. You know, one of the big indictments of Snowflake in the early days was that it had a low gross margin. Obviously, it's a very
good business. So, I think if you have a
real product that delivers a lot of
value and there's reasons why as you get
bigger the cost is going to go down and
in AI there's such an obvious reason
which is the cost of compute just keeps
coming down every year. So, the trend
line is very clear. I think you can
build a healthy business. And so I would
even go to the extreme and I haven't
invested in any of these companies, but
I would go to the extreme to say that
even some of these companies with 0%
gross margins, I can imagine how they're
going to work. Now, the companies I've
invested in typically have higher gross
margins than that. And I think that's indicative of the amount of
product that they've built. At the end
of the day, our job is to invest in
companies that become really successful,
not to be like super smart about
analyzing them. And so I think sometimes
the instinct to criticize a gross margin
can get in the way of money making. And
uh, you mentioned Doug. The thing I've learned from Doug, or the thing I admire most about Doug, is like
the job is to make money at the end of
the day for LPs, for founders, for
everybody. We all, you know, that's the
business that we're in. And so I try to
keep that as the as the goal at the end
of the day.
>> I have something called WWDD, which is "What would Doug do?": in a tough situation, I'm like, hm, WWDD. Um, we talked about one framework, margins. Growth rates is another.
The companies are just growing so much
faster than we've ever seen before. I had him on the show from GC. He said triple, triple, double, double. I was like, you know, come back when you've got something better. Brian Kim said recently it's like the cost of a Ferrari, like 2 million in ARR in like 10 days, like come on. How do you feel about this
growth rate on steroids requirement from
VCs, and how do you feel: is triple, triple, double, double dead?
>> I think of it as the zero to 100 club. So I think it's a variation
on this which is the best AI companies
right now are going zero to 100 million
of revenue very quickly and I don't
think you have to be at 100 million in
revenue to be clear but I think that as
an investor you want to believe the
company is going to be one of those
companies and I think companies that are
on that trajectory or have crossed that
trajectory are companies like Harvey and OpenEvidence and, I think, Clay and Juicebox, and I think these are
companies that are kind of on this
trajectory of growing really really
fast. Um, the reason why it's important
is because to your point on how there's
so much demand right now for AI, the
best companies, it is the best indicator
we have that you built something really
useful. People are, and we we've talked
about this actually a number of times in
our partner meetings at Sequoia. You
know, you sort of look back at the
internet, there weren't that many people
on the internet and so these companies
could only grow so fast. Right now,
everybody's on the internet and
everybody wants to buy AI. So, if you
have something really good, it's going
to get adopted really fast. And so I do
think to the point of playing the game
on the ground and adapting to what you
see in the market. The biggest thing
that we've seen in the market is that
these companies growing zero to 100 are
the companies that have smashing product
market fit. And so I'm happy to invest
in a company with 2 million that is
smashing product market fit. But I would
tell you is the companies with smashing
product market fit are growing faster
right now. And by the way, they don't
always have to grow faster. Like the
goal is to invest in something that in
20 years is this amazing public company
with billions of dollars of revenue. And
that is still the first order thing. But
I think, you know, don't fight
the tape. Like you can't ignore the
traction on the ground.
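For reference, a quick sketch of the two growth heuristics being contrasted in this exchange: the classic triple, triple, double, double path versus the zero-to-100 pace described above. The $2 million starting ARR and the one-year framing are illustrative assumptions, not figures from the episode.

```python
# Sketch of the "triple, triple, double, double" (T2D2) trajectory versus the
# "zero to 100 club" pace described above. The starting ARR is illustrative.

arr_millions = 2.0  # assume $2M of ARR as the starting point
for year, multiple in enumerate((3, 3, 2, 2), start=1):
    arr_millions *= multiple
    print(f"year {year}: ${arr_millions:.0f}M ARR")
# year 1: $6M, year 2: $18M, year 3: $36M, year 4: $72M
#
# The "zero to 100 club" bar is going from roughly zero to ~$100M of revenue
# much faster than that four-year T2D2 path.
```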
>> I always say I don't care how long you
take to get to a million in revenue, but
I care desperately about how long it
takes for you to go from one to 50.
>> Yeah. Yeah.
>> There's a lot there's a lot of data that
indicates that that is a very good
leading indicator for what it's worth.
The data I've looked at suggests that
that is a historically good algorithm.
>> You know, one of yours is UiPath and
he's a dear friend of mine, Daniel. And
I mean it took nine years to get to 550k
of ARR.
>> I wish I'd invested in him in the
first few years. I got to work on the
investment when it was later stage. But
I mean obviously amazing story and I
think one that should inspire people.
One thing I try to talk about with
founders also is like I want to inspire
founders that it can take a long time
because Silicon Valley sometimes has
this such a short-term time horizon. And
I look at Juicebox: you know, this company started three years ago. The CEO is now 25; he was 22 when they started, and he had, you know, finished Harvard in three years. The CTO dropped out of Dartmouth; he was 19. It took them three years.
They were always focused on recruiting.
They had an initial music app in college
and they evolved that into the
recruiting market. And they spent three
years figuring out what the product
should be. And now, of course, it's
growing really fast and they're really
good founders. And one thing I've
learned and this incentivizes me to
invest in companies like this, is people like David and Ishan, the Juicebox
founders who've sort of been through the
founder journey, they've been through
the pain, they understand how hard
product market fit is. I think in the
fullness of time, they are better
founders for it. And that scar tissue, even though it's really painful, I do think it pays dividends
long term. And I think for founders who
are listening who are like in year one and things are hard, you know, that's painful and there's nothing that I can say that's going to make that less painful. But we would love to invest in you as you figure it out, and we're super patient. There's this false narrative, I think, that like all the good companies, they raise the seed and then they raise the A and then they raise the B and it's all in 12 months. That's not really how most of these companies work. Clay spent
many years, it's funny we've been talking about Juicebox and Clay, Clay spent many years in the wilderness figuring out what their product was going to be. Sequoia invested at the Series A in, I think, 2019. The company spent three or four years in the wilderness really figuring it out. I look at Kareem, I think the man is like enlightened from this experience; it was a super painful experience. Varun ended up joining as a later co-founder. Amazing combination. So the company completely changed from the Series A, and then I led Sequoia's investment at, you know, a little north of a billion, which we are doubling down on in the growth stage of the company, and obviously the company has continued to rip and has done really well. So, I think
the default narrative of like, oh, I'm
going to start the company and then 12
months later I'm going to be successful.
At least in the case of two of the
investments that I'm most excited about,
that was definitely not what happened.
>> The reason you come on the show is cuz I
stalked the [ __ ] out of you. I spoke to
Varun and David, Varun from
Clay and David from Juicebox before both
said I told you I didn't have one person
not respond to my calls or messages
about you, which is like very very rare,
dude. Like that's testament to you. You
mentioned there like oh for founders who
you know it's hard and you know we don't
want to present this false picture of
being easy completely true but we are
seeing these very quick successive
rounds you know if we look at say a
Rillet or a Profound or a... do you worry
about them I remember Pat Grady once
saying to me that his biggest challenge
is that when he does a deal everyone
else wants to put in money at double or
triple the price and that really stuck
with me. Do you worry about these very
quick successive rounds?
>> I think we try to find the right balance
and uh to be to be honest, this is a
conversation I have with a lot of
founders, right? So, this is like a very
active conversation. We're all having
these conversations all the time. And
we're obviously in a market where
capital is very abundant and very
available. And so, I see the argument
for why people want to take the capital.
I think one lesson we've learned is more
capital does not make a company more
successful. Capital is fuel, but capital does not create the engine. And so, I think this is a tension. I think this will always be a tension, and I
think this is definitely a tension for
companies right now, and we learned this the hard way in 2021: getting overcapitalized has downsides. The biggest downside, in my opinion, is that it leads to this sort of internal perception of, we're winners, we're so successful, we're so great. And the only thing that makes you a winner is having tremendous product market fit and having customers who love you. And so I think that's a tension. Some founders, and I've seen some founders that I've worked with do a great job of this, they really act like the money is
not in the bank account and they really
behave diligently and the team size
doesn't grow too fast and all of this
stuff but I think that is the exception
not the rule. And I think it's actually not the founders you should be most worried about, but the engineer who joins the company the day after the billion-dollar fundraise with very little revenue. That dynamic is tricky, and I admire the founders navigating it. I don't think there's an easy answer, I wish there was, I don't think there's an easy yes or no answer,
but I think it's a tension we should be
talking about and as company builders
it's something that we need to uh we
really need to think about.
>> Speaking of Pat quite a lot, poor guy, it's like an advert for Pat. He taught
me something that was really interesting
which was two questions which are a
framework for amazing insight from
founders. And he said number one is like
uh what does everyone think they know
that actually they get wrong?
>> If we apply that to AI and what we see
today, what does everyone think they
know that they actually are getting
wrong? I guess I would say and this is a
really hard-won lesson, and it's
something I've learned from a lot of my
mentors in this industry because I think
one of the things I really try to do is
learn from people who've been doing this
for longer who are smarter who are more
thoughtful and uh one lesson that I've
learned in this business is that
anything multiplied by zero is zero and
I think that's one of the really tricky
things in investing which is just to say
that market volatility doesn't matter in
the long run if you have a great
business but if you overextend yourself
and then some crash happens and you go
bankrupt, you're bankrupt, right? Like
there's no way out of that. And so, and
I think that there's sort of this sense,
I heard this phrase recently, momentum
has its own reality. And I think there's
this sense of everyone is living in this
like reality distortion field of
momentum. And I think of it almost like, you know, a slingshot. You pull the slingshot back and then, you know, you release the thing, and then it has its own momentum after that. And that's sort of a fundamental law of physics: things in motion stay in motion, and things at rest stay at rest. And so I think the thing that people are so confident in is this reality distortion field that comes
from momentum. And uh when that reality
distortion field goes away uh you need
to survive that. And I think one thing
that I hope that I can be to my founders
is a partner and I I you know they'll
listen to me 10% of the time and that's
fine but a partner in um you know making
sure that we survive those moments and
navigate those moments well and position
ourselves well against that. And I think
that the most prudent of investors or
like the most sober of investors can
actually be really helpful. Your job as
a founder is to be maximally aggressive
and you should do that. And then the investor should hopefully be
giving some advice, helping think these
things through, giving some
perspectives, uh you know, understanding
a broader time-horizon perspective
and a broader data set of companies and
then you sort of navigate to the right
uh to the right end destination. So I
think um I don't think people are
thinking about this sort of concept of
anything multiplied by zero is zero,
because the time horizon is so
compressed into this shorter period of
time. Um and that's just something that
I think a lot about.
>> The final one that Pat taught me and
then we'll move to talent which I do
want to touch on for a quick-fire. The other one, yeah...
>> A very, very, very good guy.
>> What is no one thinking about that
everyone should be thinking about? So
like for me, one I think is astonishing is that no one is thinking that if you stuff engineers full of capital, the capital that you are stuffing down any of these multi-billion-dollar companies, you may not get an equivalent level of productivity as when they didn't have multiple billions of dollars. Give a nerd billions of dollars, nerd buys five cars and a boat, nerd not so productive. Like, sorry to be so blunt and direct, but it's the same with companies. I think that
companies underestimate 23 year olds and
24 year olds. I think this is something
that people really really underestimate
and I think this is more true than ever
right now in AI. Like, you know, I meet probably 200 or
300 young recent college grads every
year. And the reason I meet them is I
want to recruit them into my companies.
A lot of them are founders. This is the
population that I learn the most from
because I know that my blind spot is
going to be that somebody started using
ChatGPT when they were 18 and I didn't.
And and so they're going to have a
different perspective. And that's the
perspective I need most in my life. In any case, I introduced some of these people to companies, and the companies are like,
well, what's their skill set? Like why
should I hire them? And um you know, I
guess I think this is something that
people are not thinking enough about in
AI right now, which is: ChatGPT's been around for 5 years. Nobody has more than
five years of experience in AI. The
playing field is super level. And I
think in a changing and dynamic market
environment, uh dynamism and slope and
ability to learn are more valuable than
ever. And so the thing that inspires me
and the thing I spend a lot of time
thinking about is, you know, in a
Juicebox for example, how can we get the
very best 23 year olds in the world
working at this company? And that's a
big part of my job. And I spent a huge
amount of time on that, a huge amount of
time. I'm there one day a week right now
at Juicebox just working on this. So how
do we get the best people in the world
inside of these companies? And I think
maybe 10 years ago in the era of
software, um, you know, a senior
software engineer, a staff software
engineer, they had more experience than
an L3. And, uh, you know,
architecture is hard, writing code is
hard and they they were much better and
so maybe it made sense there was this
old playbook for startups of like oh you
hire the staff software engineer who
kind of knows what they're doing and you
don't have to train people I think that
the new playbook for these AI startups
is actually going to be much more about
hire the AI generalist this 23 24
25year-old who's really native in AI
really passionate about it and I think
those are the the sort of the front
lines that are going to make great
companies
>> Totally agree and understand that. Do you worry about emotional maturity a little bit? And I don't mean that patronizingly, but Jesus, I mean, I'm 29 now, but when I was 22, 23, I did some things that I would not do now.
>> I think that hiring always has
trade-offs. I think one thing I believe
more generally speaking, because it's
worth saying, is um I really believe in
trade-offs. I think everybody wants the
free lunch thing. Like, when you don't
know the trade you're making, then then
the negative is hiding from you. There's
no such thing as a trade without
negatives. There's no such thing as a
decision where it's all positive and no
negatives. So, I always talk about this, and I talk about it a lot, actually. It's like hidden risk versus visible risk.
And so when you hire a 23-year-old,
there's a very visible risk. They're
emotionally immature. They don't have
any work experience. It's very obvious
the negatives that you're taking. When
you hire someone who's more experienced,
the risk that you're taking is less obvious. It seems to be lower risk. And
every decision is a risk, right? And so
maybe the risk that you're taking is
that they're not going to work as hard.
Maybe the risk that you're taking is
that they're less AI native. There, you
know, there's always a risk. And I think
people have this tendency to favor the
hidden risk. By the way, price is a
hidden risk. You don't perceive it as a
risk, but it is a risk. Um, and so
people prefer hidden risk over visible
risk. And I prefer visible risk. I want
to know exactly what risk I'm taking.
And then, by the way, I'm a huge risk
taker. I started investing eight years
ago, right? Like I love risk. So, I
think it's important to calibrate that
like I love risk-taking, but I want to
take visible risks that I know the risk
I'm taking. And I think herd behavior
and consensus mentality is about hidden
risks. The risk is just beneath the
surface and you're not paying attention
to it. Um, whereas I want to take risks
that I can see. And I think there's a
lot of areas. The point I'm trying to
make is um in the hiring dynamic when
you hire a 23-year-old, it's like super
obvious why you shouldn't hire them. And
yet sometimes that's okay because the
reason you should hire them makes up for
that more.
>> Completely agree from the employer side.
On the flip side, um when you think
about, like, advice to them: if you were advising your younger sibling on choosing their first job... I saw on LinkedIn you said follow the smartest people a year ahead of you. That moniker
of advice may not be relevant anymore.
What advice would you give to them?
>> Well, this is like the biggest learning
because I've met with two or three hundred
young people a year. I have a very big
data set and I think I've probably spent
more time than anybody at Sequoia on this
specific, you know, thing. And my
biggest lesson is that the way that
young people choose their career is what I call the mimetic algorithm. And the mimetic algorithm is: what did the people one year ahead of me in school that I thought were the best, what did they go do? And it's a recursive algorithm, right? So it's like, what did the people a year ahead of me do? But those people chose based on what the people a year ahead of them did, and those people chose based on what the people a year ahead of them did. Now, one reaction to that would be negative, like, oh, that's so mimetic.
They should think for themselves. I
actually don't have that perspective.
I'm fine with it. I think it's like a
reasonably good algorithm. When I
graduated from college, Palantir was the hottest company to go work for. All the really smart people went to go work for Palantir. Going to work for Palantir would have been a great life
decision at that stage. Uh before that,
you know, in the early 2010s, Google and
the big tech companies were the hot
place to go work. And I think you know
those companies were all 10x's over the 2010s, some of them even, I think, 25x's. So it was a good decision to go work at Google in 2010. And so I don't think the mimetic
algorithm is inherently broken and I
respect it and and I think that people
to your point of maturity people are
going to go through a maturity curve.
They're not going to use this algorithm
when they're 30s. They're going to
evolve. They're going to change. And so
I I sort of have this respect for it.
That said, I do think that recursive
algorithms break down in the face of
dramatic new data. And the dramatic new
data is the AI cataclysm. AI has totally
changed how the world is going to work
and it should change your forecasts on
the future. And so the recursive
algorithm of, what did the person a year above me, and the person a year above them, do, is actually breaking, because those people didn't have this information. They didn't know that AI was going to change the world. They didn't understand gen AI. And so I
think the advice that I try to give
young people is just factor that into
your algorithm. Like you do you. It's
like join the company that you want to
join. Go to the place that's going to
make you the happiest. But, you know,
factor that in. And then it's worth at
least giving a shout out to this group
of people that I call builders in this
uh, Substack post that I did. Most people, 90-plus percent of people, the question they're
asking when they're choosing a job is
like, what can I get from this job? What
is it going to enable me to do? Who am I
going to surround myself with? How am I
going to become better? It's very, it's
a very like what do I get out of it? I
think there's like a 10% group of
people. Maybe it's 5%, maybe it's 1%. I
don't know exactly what the percentage
is but there's this group of people that
they're asking the question what can I
contribute and by the way if you
contribute a lot you generally get to
extract a lot and so I think
contribution this is again a beautiful
thing about capitalism is like when you
contribute a lot I do think that you get
rewarded for that and so those are the
people driving Silicon Valley like those
are when you go into a company and
you're like why is this company
succeeding it's those type of people and
those are the type of people who like
they go from one great startup to
another great startup to another great
startup and so anyways that distinction
between these two groups of people both
valid
no problem with either of them. Like you
got to respect career is a very personal
decision. And so anyways, it depends on what you're trying to solve for: what's going to grow my career, option one, or where can I contribute the most and therefore extract the most, option two. I think there's a bunch of
great opportunities ahead of you and um
just factor in the AI variable.
>> I think one thing that just frustrates
me on this topic is like the
mimeticism that continues despite market changes in the UK. And what do I mean by that? Goldman Sachs, investment banking, consulting: that is still what people tell you if you go and speak at universities, which I do once or twice a week now.
>> Wow.
>> Everyone still wants to be an investment
banker. And so my when you were talking
I was thinking what does it take to
break the mimetic chain and maybe it's
AI and the proliferation of AI and
popular culture and media, but I
think it's changing. I agree with you
like it's changing too slowly and that's
why I'm having these conversations. I'm
I'm trying to help and I'm sure you are
as well in in these talks that you're
doing. I think that um one positive that
I would say is I've seen a material
change in the last 12 months which is
sort of interesting because it's not
like I'm not saying the last 24 months.
I'm not saying the last 36 months like
it took two years after ChatGPT for
this to really start flowing through.
But I would say it is and by the way a
lot of the people I'm talking to are
currently investment bankers who want to
get into AI companies. So it is sort of
funny that way. I think there's more and
more of these high performing people
want to be inside of AI companies. And
that's why I think it is a two-way match. Like these companies
need these people more than ever, but I
think these young people can benefit
more than ever from being an AI company.
And again, maybe to make the value prop
clear for like the young person, right?
Like the value prop is, hey, maybe 10
years ago if you joined a startup and
and people didn't join startups that
often 10 years ago, maybe 10 years ago
if you joined a startup, like there's
this whole experience curve. You're the
junior engineer. there's a lot of people
who are smarter than you and you're
going to have to learn and it's like
going to take five or 10 years to become
a really meaningful contributor.
That's not really true anymore, right?
You're sort of entering at like much
more parity with everybody else. And so
I think there's good reason why people
are making this change.
>> Dude, I'm throwing in a curveball here, but I was told that you're the man who does defense at Sequoia, and, you know, I say this with love, but I'm going hardball on this one. How would you respond to: Sequoia were asleep at the wheel when it came to defense, not being in Helsing and Anduril, the two clear market leaders in the category?
>> I would say, and I think this ties into our conversation so far, that defense is the next AI, and that's, like, how I started getting involved in AI. I think that for defense, if the transformer moment was sort of the starting gun in AI, the ChatGPT moment hasn't happened yet. So I do think, look, there's no way around it: Sequoia was late to defense. But I think Sequoia is working really hard to catch up, and that's part of business.
You don't always get things right, but
you keep trying and I think we have that
ethos and we have that humility.
>> Why do you think defense is the next AI?
Sorry. So I think that you know it's
funny because I started investing as we
were talking about a year after the
transformer paper, in 2018, and, you know, I think that defense reminds me in some ways of a few years after the transformer paper, which is to say people who are really paying attention understand that defense is going to change, and the transformer moment was the Ukraine war. It was very odd: you know, before that you had to be a visionary, and to Palmer's credit and Peter Thiel's credit and people like this, they were visionaries. Before the transformer paper you're a visionary, and, you know, Ilya and Andrej Karpathy, these people are visionaries. After the transformer paper you're an early
adopter right and I think our job as
investors is to be early adopters uh for
the most part especially in the growth
business to be early adopters and so you
see that you see the change that
happened in Ukraine and um and I think
it was very obvious that like you know
warfare you see these pictures of these
tanks you know and these like long
chains of tanks from Russia and it's
like wow like defense technology is 50
years old and technology has moved so
much in 50 years and yet like the way
that we do war just hasn't changed and
that's because, you know, we've been in this golden era for the world, of dramatic peace and prosperity and all this stuff. And so anyways, I think that the transformer paper moment was the Ukraine war, and then I think the ChatGPT moment hasn't happened yet, and so I think that defense is actually, you know, underhyped in some ways, or underestimated in some
ways and that's why I started getting
interested in defense uh two years ago.
>> When you look forward to the world of
AI, you've assumed that everyone will be
improved with AI, will use it hundreds
of times a day and it'll be a part of
everything that we do and think and
say in many respects.
Taking that view on defense then assumes
this continuing conflict
increases not even decreases from where
we are today. That would go against
human cycles. There are periods of
intense conflict, periods of not.
>> But suggesting that defense is the next AI, I would suggest, assumes that that is the case. How do you feel about that?
>> Yeah. So I think by the way I'll share a
little bit of how I got interested in
defense, and, um, you and I know each other now. So, like, before I got into AI, I
was reading all this stuff and I'm
trying to learn from people and I I
think my sort of investing style is like
you spend two years learning about the
thing and then you kind of start
investing in the thing. And so I I sort
of take my time to sort of build a
foundation. And my foundation in defense is, like, reading Napoleon and Churchill and, you know, all of the history of war, the history of wars, the history of defense, like
geopolitics. Really like getting
educated. And I probably spent two years
really educating myself and meeting
founders and you learn a lot from
founders in this space before I got
involved. And the thing that I learned
and I think the thing that a lot of
people who are deeper in the space than
I am already understand is: deterrence is the first thing. You know, you only go to war because
you have to. The whole point of defense
is to prevent wars and geopolitics is a
real thing and there's like real
competition between nation states and
that's always that that will continue
and so as the world gets reshaped and we
are living through a reshaping of the
world order, I think that's something that a lot of people have seen and have written about; there's a lot of variables there that we can unpack. Uh, I
think Ray Dalio's Principles for Dealing with the Changing World Order is a really good book on this topic. So the world order is sort of fundamentally changing, and
that leads to this interesting
opportunity where um we have to sort of
catch up. There's a 50 years of catchup
that has to happen. That's how I see the
current defense moment. And this is why
I say we're, you know, two years after the transformer paper and we're not even at the ChatGPT moment yet: we're like 1% there
on catching up like we're actually so so
early in this defense cycle because you
know now we have a few dozen companies
maybe a hundred companies that have sort
of new innovations. They're not
integrated into the force structure
meaningfully yet. There's so much more
that has to happen and I think that we
have our, you know, the clear market
leader now in the United States with
Anduril, and I think there's more
companies uh internationally that are
going to do really well as well. But I think, you know, we've sort of crossed the chasm of,
this is a thing that matters. We've
crossed the chasm of the government
knows this matters. We've crossed the
chasm of you talk to people in
Washington DC, they now understand
Palantir and Anduril. They know those
businesses. But in terms of the force
structure changing, in terms of the way
that we actually protect ourselves
changing, in terms of US deterrence
changing, I don't think it's changed
that meaningfully. And I think after the
ChatGPT moment, what's going to happen is that, you know, pre-ChatGPT, if you were paying attention, you noticed; after ChatGPT, everyone knew this was important.
Every American, every single person. And
I do think we're going to get to a place
in defense where everybody knows that
this is really, really important and
that we need these companies to succeed.
>> Do you not worry about the concentration
of buyers in that world? Again, when you
compare it to defense, you have every
business in the world or every consumer
in the world. What I really don't like
with defense is actually what Brian
Singerman told me about what makes
Anduril so special, which is the complementary skill set of the founding
team. You know, whether it be GTM into
like defense and government, whether it
be product, whether it be intense ops
with, um, you know, their CEO Brian Schimpf.
Um, and I I just don't like the
concentration of buyers and the selling
to governments and the lack of incentive
for them. Do you not worry about that?
>> I think I I definitely think about that.
And I guess my framework, and this is the thesis that I've been investing behind now for the last couple of years, is there are going to be fewer companies that succeed in defense for
this reason. Defense is consolidated for
good reason. There's a single customer
and so you need to serve that customer
really well. And I think that what makes
a great defense company is to be a
national champion. Fundamentally what
makes a great defense company is to
understand the customer and to be able
to serve the customer and to be able to
drive what is fundamentally a nationwide
transformation that needs to happen.
You know, people talk about
digital transformation. This is a
digital transformation for the defense
space. That's what it is. It's funny
that it's a very old phrase, right? But
defense actually hasn't gone through it
yet. Um, you know, it reminds me of Wiz, where Wiz really benefited from the rise of cloud. And you would have
said what do you mean? Like cloud was
already a thing in 2017. But of course,
these things take time and so I think
we're finally going through the digital
transformation um for defense and I
think there's going to be a few
concentrated winners in each country and
we'll have venture-funded, equity-funded sort of R&D companies that come out, and they'll get consolidated into the national champions. And in my view, Anduril is clearly the national champion in the US, and credit to that team,
just really phenomenal company
phenomenal visionaries. So the other two
uh national champions that I've invested
in, one is a company called Kela, which
we think is going to be a national
champion. It's based in Israel. The
thesis is that Israel has the best
people in the world for this and they
can help defend the United States and
they can help defend Europe. And the
second company in Europe is a company
called Stark, which Sequoia has now
invested in over two rounds. Uh and that
we believe can be the European national
champion. And both companies have done
have done really well, but they're
earlier.
>> I spoke to Alon at Kela, I think. No, Alon and Hamutal, phenomenal people. Hamutal is the, uh, GM for Palantir Israel. To our conversation on talent, they've, like, you know, they've become a massive talent consolidator in Israel. I think the two big talent consolidators right now in Israel are Kela and Decart.
>> I get in trouble for this. I don't think
defense is a category. And you're like, what? A category is one that can support an ecosystem with its breadth and depth. I don't think defense is. I think there is your Anduril and maybe two to three more in the US. And I think there's, you know, Kela and Helsing and Stark, but I don't think it's like SaaS, where there are 30 to 50, or fintech, where there are 20 to 30. Do you agree with me?
>> I do agree with you. Um I mean my
objective I've probably invested in a
dozen AI companies in my career. I hope
to invest in 20 more. Um my objective is
not to invest in 20 more defense
companies throughout my career. I think
it's going to be a very small handful of
companies. Maybe we'll do one every
couple years. Uh but it's that there's
you have to go after the right
opportunities. You have to build in the
right way. Uh and the winners are are
going to keep scaling.
>> I think so many of the dollars going
into it today will be lost. I see so
many, like, you know, McKinsey consultants who are now VCs being like, oh, my cost
per kill and I'm like you have no
freaking idea what you're talking about.
>> Yeah, we don't think that way. Um I
think we think in terms of defending the
country, in terms of having people feel
safe and in terms of deterrence. So I
agree with you. I don't love that type of language.
>> Dude, I want to do a quick fire around.
So, I say a short statement, you give me
your immediate thoughts. Does that sound
okay?
>> Perfect.
>> So, what have you changed your mind on
in the last 12 months?
>> We talked about this a bit last time,
but I can close the loop for people,
which is uh I finally decided to learn
how to drive and I got my driver's
license in January, which I think is
funny because it's kind of like
capitulating right before the trade is
in the money. Like I was waiting for
self-driving cars for all these years
and then I finally got a license and now
of course self-driving cars are on the
streets every day. So
>> Come on. We were equal on one thing. Why did we go and do that?
>> I encourage you, Harry, go out and learn. It's a good experience. Well, I was told that I had to because I was having a baby, and I think that was pretty reasonable, uh, to help my wife and drive around the baby.
>> Tell me, how has being a father changed
you? You know, people, a lot of people
say this and it's true. It just focuses
your priority. It's so important. Um, I
think it uh it makes you less abstract.
Like you can think about things in
abstract. Your child is not abstract.
Like your child has needs and they need
them right now. And so I think there's
something that uh in terms of just like
bringing you into the present is really
valuable about that.
>> What would be your biggest advice to me on partner selection? Um, so many people told me you had a great, wonderful marriage and that they wish to emulate it, and I was like, wow, [ __ ] okay. Huh. Good.
>> Very, very kind. I mean, I would say I
guess pick right. Um I mean, my wife is
smarter than me and better than me, always. Um
>> if your wife is smarter than you, David,
I'm worried for the conversations you
have at dinner.
>> I'm excited for you to meet her. No, I
think that, look, one thing that has really shaped me over the last few years, especially after getting
married and having a kid, is you know,
people talk about the importance of
shared values. And I think there's like
every year that becomes more clear to me
that that is true. And I think I met my
wife very young. I didn't understand
that fully when we first met and I'm
very grateful for that every day.
>> Tell me, dude, what's your biggest miss
and what did you not see that you should
have seen with the benefit of hindsight?
>> One big financial miss is Datadog, and I worked on this before joining Sequoia, but I remember, you know, Datadog was
this amazing company. The numbers were
incredible. It was profitable. Like it
was one of these businesses where you
just like your mouth waters looking at
the financials of that business. And um
and I remember we lost, we lost to Dragoneer. And I never confirmed this with Dragoneer, but the story behind it really stuck with me, which was that Dragoneer had this list of 20 companies and they only worked on those 20 companies. And they had been spending years and years and years courting Datadog. And this was their number one priority, and they knew it was number one priority for a few
years. And this was probably six years
ago now that this happened, but it's a
principle that has really shaped how I
pursue new investments, which is: I want to really focus my time. And I've actually adapted this to, the top five opportunities, that's where I really want to be spending 80% of my time, and then I want to spend the next 20% of my time on the next 15, and after that, just really trying to focus your time. And so that actually shaped who I
became as an investor and I I learned a
lot from that.
>> Uh, penultimate one, dude. What one technology do you think is wildly undervalued and why?
>> I think that people are underestimating
voice as an interface for AI. Um, and
you know, we just I think today uh right
before this uh podcast, we announced our
investment in a company called Sesame,
which is an AI voice company, an AI
conversation company. I got to work on this with Roelof. Uh, the founder is the former CEO of Oculus, and it's Roelof, Marc Andreessen, and Santo, who's the founder of Spark, on the board. So it's a really good company. Um, and you might have seen the company's launch a few months ago. They launched this AI voice product that you can talk
to. Got a million users in a few weeks,
5 million minutes. Like just tremendous
product market fit. I always had this
view that you know we're not always
going to be like staring at our phones.
Uh and that's not like the terminal
interface to technology and to AI. And I
think that, you know, I always had this
view, but you tried all the AI voice
products and they all kind of sucked.
Like they were not good. Um, they were
boring to talk to. They didn't remember
anything about you. You couldn't
interrupt them. You couldn't really have
a dynamic conversation. Your brain just
said like, "This is a robot." And, uh,
when Sesame came out, it was just a
radically better experience. Within 10
minutes of seeing this technology, I
knew that we were going to invest and
and we ultimately did. Um, and so I
think the idea that we're going to be
sitting here in 10 years talking to our
AI, having a relationship with our AI, I
think that's very likely and I think
it's it's a little bit sci-fi right now.
Uh, and I think it's going to get less
so in the coming years.
>> Well, listen, Sam Altman's opened the door to erotica. So, I mean, you
never know what's coming. Um, we're not
going to end on that cuz that would be a
weird ending to end on.
Uh, but the thing I want to end on, I
like positivity. I [ __ ] hate the
doomsday scenarios. What are you most
excited for when you look forward 10
years? What is like this is what gets me
out of bed?
>> I think this is a good place to end the
conversation because my answer is AI.
It's sort of funny because we've been
talking this whole time about the ups
and downs of AI and the risks and the
challenges and all the complexity, but
at the end of the day, AI is the most
important story of our lifetime. It's
going to completely transform the world.
It's going to be, you know, this event that is sort of a once-in-human-history kind of event. And I think it's
going to be a really, really epic ride
to be on. And I'm excited to be on it
with you and with everybody else because
I think we're all gonna... it's going to
change our lives a lot.
>> Do you know what's going to happen,
David? I'm going to come to the valley
and if it's okay with you, you're going
to take me on a drive and
>> Okay, great.
>> We're going to get a photo for Ro of both of us in a car
driving. I like it. Beautiful.
>> Dude, you're a star. Thank you so much
for joining me, man.
>> Thanks for having me, Harry. This was very
fun.