The Truth About The AI Bubble
By Y Combinator
Summary
Topics Covered
- Anthropic Overtakes OpenAI in YC
- Model Orchestration Beats Loyalty
- AI Bubble Enables Startups
- Space Solves Data Center Crunch
- AI Economy Stabilizes
Full Transcript
I think perhaps the thing that most surprised me is the extent to which I feel like the AI economy stabilized. We have the model layer companies, the application layer companies, and the infrastructure layer companies. It seems like everyone is going to make a lot of money, and there's kind of a relatively clear playbook for how to build an AI-native company on top of the models.
Many episodes ago, we talked about how it felt easier than ever to pivot and find a startup idea, because if you could just survive, if you could just wait a few months, there was likely going to be some big announcement that would completely make a new set of ideas possible. And so finding ideas is sort of returning to normal levels of difficulty.
Welcome back to another episode of The Light Cone. Today we're talking about the most surprising things that we saw this year, in 2025.
Diana, you found a pretty crazy one.
It's sort of a changing of the guard almost in who is the preferred LLM at YC during the YC batch.
>> Yes. In fact, we just wrapped up the Winter '26 selection cycle for companies. One of the questions we ask all the founders that apply to YC is: what is your tech stack and model of choice? And one of the shocking things is that for the longest time, OpenAI was the clear winner, all of last year, the last couple of batches. But that number has been coming down, and shockingly, in this batch the number one API is actually Anthropic, which came out a bit ahead of OpenAI. When we started this podcast series, OpenAI was at like 90-plus percent. And now it's Anthropic. Who would have thought?
>> Yeah. And you know, they'd been hovering around like 20 to 25% for most of 2024 and early 2025, and then only in the last three to six months did this changing of the guard actually happen.
>> They had this hockey stick of growth, now over 52%. Why do you think that is?
>> I think there are a couple of things in terms of the tech stack selection. As we've seen this year, there have been a lot of wins in vibe coding tools getting built out there, and coding agents. There are so many categories that this ended up being a bigger problem space that's actually creating a lot of value. And it turns out the models that perform best at it are Anthropic's. I think that's not by accident. From the conversation we had with Tom Brown when he came and spoke not too long ago, coding was one of their internal evals. They made it their north star on purpose, and you can see it in the model's taste. As a result, the best choice of model for a lot of founders building products is Anthropic.
>> The vast majority of the use cases people are using it for, though, is not coding. So I wonder if there's a bleed-through effect where people are using Claude for their personal coding, and as a result they're more likely to choose it for their application, even if their application is not doing coding at all,
>> because you'd be very familiar with the personality of Claude Opus or whatever they're choosing.
>> Yeah.
>> Sonnet I suppose.
>> How about Gemini? How's Gemini doing in those rankings?
>> Gemini has also been climbing up pretty high. I think last year it was probably single-digit percent, even like 2 or 3%, and now for Winter '26 it's about 23%. We've personally been using a lot of Gemini 3.0 too, and we've been impressed with the quality of it. I think it's really working.
>> I mean, they all have different personalities, don't they?
>> They do.
>> Yeah, it's kind of the classic where OpenAI sort of has the black cat energy, and Anthropic is more the happy-go-lucky, very helpful golden retriever. At least that's what I feel when I talk to them.
>> And how about Gemini?
>> It's kind of like in between.
>> Harj, you prefer Gemini, actually.
>> Yeah, I switched to Gemini this year as my go-to model, I think even before 2.5 Pro came out. It just seemed better at reasoning. For me, increasingly, I replaced my Google searches with Gemini, and I just trusted Google's grounding API and its ability to actually use the Google index to give you real-time information correctly. I personally found it was better than all the other tools for that, and better than Perplexity on it too. Perplexity would be fast but not always accurate, and Gemini was not quite as fast as Perplexity but was always pretty accurate if I asked it about something that happened today.
>> Even if you use Gemini as the reasoning engine in Perplexity?
>> I have not done that.
>> Interesting. Yeah. So it's hard to know like how much of it is the tooling and how much of it is like the base LLM.
>> That's fair.
>> Yeah. I mean, what are your guys' tools of choice? I haven't switched off of ChatGPT. I find the memory very sticky. It knows me, it knows my personality, and it knows the things that I think about. So I'll use Perplexity for fast web searches or things that I know are a research task, because I think ChatGPT is still a little bit of a step behind for searching the web. I think memory is turning into an actual moat for that consumer experience. And I don't expect Gemini to ever have the personality that I would expect from ChatGPT. It just feels like a different entity, you know? The thing I'm still surprised about is why there just aren't more consumer apps around all the various things we do. If I think back, one of the big changes for me this year is just the amount of prompting and context engineering I do for my life. We bought a house recently, and I just had a really long-running ChatGPT conversation, stuffing it full of context, like every inspection report, wanting it to level the playing field between me and the realtor so I could understand all the dynamics and things going on. And it just feels like there should be an app for that.
>> But simultaneously, I'm sure you took the PDFs and just dropped them into Gemini and said, well, summarize these and tell me what's important for me.
>> I guess I worried that... I still don't trust the models enough to be accurate without lots of prompting, and it's a high-value transaction, so you don't want to get incorrect data out of it. So I still feel like you need to put in the work, and it feels like there should still be apps that just do all the work for you.
>> Did you see Karpathy released sort of an LLM arena of a sort? Which I do by hand right now using tabs. You have Claude open, you have Gemini open, you have ChatGPT open, and you give them the same task. Then you take the output from each, and I usually go to Claude at that point and say, all right, Claude, this is what the other ones said, what do you think? And they check each other's work. I think that particular behavior at the consumer level, that we're doing, startups are doing as well. They are actually arbitraging a lot of the models. I had conversations with a number of founders who before might have been loyalists to, let's say, OpenAI's models or Anthropic's. I just had some conversations with them recently, and these are founders running larger, Series B-level companies with AI. They're actually abstracting all that away and building an orchestration layer where, as each new model release comes out, they can swap models in and out, or they can use specific models that are better at certain things for just that. For example, I heard from one startup that they use Gemini 3 to do the context engineering, which they then feed into OpenAI to execute, and they keep swapping as new models come out; the winner for each category or type of agent work is different. Ultimately they can do this because it's all grounded in evals, and the evals are proprietary to them, because they're a vertical AI agent working in a very regulated industry, and they have a data set that just works best for them. I think this is the new normal: people are expecting, yeah, it's cool that the model companies are spending all this money making intelligence faster and better, and we can all benefit. Let's just use the best one. It's almost like the era of Intel and AMD: a new architecture would come out, and people could just swap them in, right?
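The swap-in, swap-out pattern described here can be sketched in a few lines. To be clear, everything below is hypothetical: the model names, eval scores, and task categories are invented placeholders, not real benchmarks or API calls, just an illustration of eval-driven routing:

```python
# Hypothetical orchestration layer: route each task category to whichever
# model currently wins on your private evals, and swap winners as new
# releases come out. All names and scores here are made up.

# Latest private-eval scores per task category (higher is better).
EVAL_SCORES = {
    "context_engineering": {"gemini-3": 0.91, "gpt-5": 0.88, "claude-opus": 0.86},
    "code_generation":     {"gemini-3": 0.84, "gpt-5": 0.87, "claude-opus": 0.92},
    "execution":           {"gemini-3": 0.80, "gpt-5": 0.90, "claude-opus": 0.85},
}

def pick_model(task_category: str) -> str:
    """Return the current eval-winning model for a task category."""
    scores = EVAL_SCORES[task_category]
    return max(scores, key=scores.get)

def route(task_category: str, prompt: str) -> str:
    """Dispatch a prompt to the winning model (stubbed out here)."""
    model = pick_model(task_category)
    # A real system would call the chosen provider's API here.
    return f"[{model}] would handle: {prompt}"
```

The point of the design is that when a new model release lands, only `EVAL_SCORES` is updated from a fresh eval run; no application code changes.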
>> Yeah. It feels, at the highest level, like that angst around where the value is going to accrue, whether to the model companies or to the application layer and startups, ebbs and flows in either direction a little bit throughout the year. There are moments, like Claude Code's amazing launch, where it's like, oh, okay, the model companies are actually going to play at the application layer. But then, to me at least, it's all vibes-based: Gemini's surge, especially over the last few months, feels like it returns us to a world where exactly that happens, the models all essentially commoditize each other, and the application layer and the startups are set up to have another fantastic year if that continues. I'm curious what you think, Jared, about a lot of the perhaps negative comments on Twitter around whether this is a bit of an AI bubble.
>> Yeah. When I talk to undergrads, this is a common question that I get: oh, I heard it's a big AI bubble, because there's all this crazy round-tripping going on between Nvidia and OpenAI. Is it all fake?
>> No, this is fantastic, right? People look at the telecom bubble: there were billions of dollars, tens of billions, hundreds of billions, just sitting in a bunch of telecom back in the '90s. Actually, that's why YouTube was able to exist, right? If you just have a whole bunch of extra bandwidth that isn't being used and is relatively cheap, the cost is low enough for something like YouTube to exist. If there hadn't been a glut of telecom, maybe YouTube would still have happened, it just would have happened later. And isn't that sort of what we're talking about here? We have to accelerate, right? We have the age of intelligence. The rocks can talk, they can think, and they can do work. And you just have to zap them more, and you get smarter and smarter stuff. At this point, I think the argument to college students is actually that because there will be a glut, there is an opportunity for you. If there were no glut, there wouldn't be as much competition, the prices would be higher, and the margins lower in the stack would be higher, right? And then, you know, what's one of the big stories this year? Nvidia suddenly is on the outs. I think their stock today is around the 170s or something. I'm still at long-term buy and hold, honestly. But for the moment, people are like, oh, well, Gemini is so good, and nobody seems to be Nvidia-only now; everyone's buying AMD, and TPUs are working. So what does that mean? At the moment it looks like there's competition, which means there will be more compute, not less. That probably means slightly better things for all of the big LLM companies, the AI labs; they get a little bit of power, but they too are in competition with one another. So then what does that mean? Go up another level in the stack, right? As long as there are a great many AI labs in deep competition with one another, that's even better for the college student who's about to start a company at the application level.
>> Yeah, I think that's exactly right. People are asking this question, is it a bubble? That's a question that's really relevant if you're the equivalent of Comcast. If you're Nvidia, that's a very relevant question: are people overbuilding GPU capacity? But the college students, they're not Comcast, they're actually YouTube. If you're doing a startup in your dorm room, it's the AI equivalent of YouTube, and it kind of doesn't really matter that much. Maybe Nvidia's stock will go down next year, I don't know. But even if it does, that doesn't mean it's a bad time to be working on an AI startup.
>> Yeah. It's what Zuck said on a podcast earlier this year, I think, right? Meta may end up overinvesting a significant amount in capex and infrastructure, but the big companies essentially have to do it, because they can't just sit on the sidelines. And in case demand falls off a cliff for some reason, it's their capex, not the startups' capex, and there will still be tons of infrastructure, and ideas to keep building.
>> There was this book written by the economist Carlota Perez, who studied a lot of technological revolutions. It summarizes that there are really two phases. There's the installation phase, which is where a lot of the very heavy capex investment comes in, and then there's the deployment phase, where it really rips and everything explodes in terms of abundance. The initial installation phase is when it feels like a bubble. There's a bit of a frenzy, because it starts with this new technology that's amazing, which happened with the ChatGPT moment in 2023. Everyone got super excited about the tech, then everyone got super hyped and got into investing in a lot of the infrastructure: buying a lot of GPUs and the giant gigawatt data center build-out. And then people ask, but what is the demand? What are all the applications that are going to be built? I think right now we're in that transition, which is actually really good news for startup founders, because they're not involved in building the data centers, but they're going to build the next generation of applications in the deployment phase, when it really proliferates. Going back to the analogy with the internet era before 2000: there was a lot of heavy capex investment into the telcos. Those were giant projects that college students wouldn't be involved in, but they were very heavily invested, and in some cases overinvested. That's the whole thing with dark fiber and pipes that were never used. And that's fine; the internet still ended up being a giant economic driver. What that means is that startups like the future Facebook or the future Google are yet to be started, because those come in the deployment phase, and right now things are still getting built up. I do think the foundation lab companies and GPUs very much fall into the infrastructure bucket.
>> Yeah, I mean, it's interesting to watch how this stuff is evolving. Do you remember, in Summer '24 there was a company called StarCloud that was one of the first to come out and say, we're going to make data centers in space. And what was the reaction?
>> People laughed at them.
>> On the internet, yes. They said it was the stupidest idea ever. And I guess 18 months later, suddenly Google's doing it, Elon's doing it.
>> In every interview now, apparently, it seems to be his top talking point.
>> Yeah. And so why is that? I feel like one aspect is that part of the infrastructure build-out right now that's so intense is that we literally don't have power generation. Boom Supersonic, instead of making supersonic jets right now, is on this quest to create enough power for a bunch of these AI data centers being built right now. They use jet engines, and even those are constrained: the supply chain for jet engines to generate power for data centers is so backed up that you would have had to order these things two or three years ago just to have them two or three years from now. These constraints end up influencing, fairly directly, what the giant tech companies need to do to win the game three or five years out. Suddenly there's not enough land. In America, we can't build; the regulations are too high. In California, we have CEQA, which is totally abused by the environmental lobby to stop all innovation and building of housing. We just don't have enough, terrestrially, to do the things that society needs right now. So the escape valve is: actually, let's just do it in space.
>> Yeah, come to think of it, we kind of have the trifecta of YC companies that are solving the data center build-out problem.
>> Well, you need fusion energy.
>> Yeah, yeah. Well, we have the company that's solving the no-land problem by building the data centers in space. We have Boom and Helion, which are solving the we-don't-have-enough-energy problem. And, fun fact, today a space fusion company that just graduated, called Zephr Fusion, actually had a great seed round out of Demo Day.
They're in their 40s, national lab engineers who spent their entire careers building tokamaks and fusion energy. They came into the lab one day, looked at the physics, looked at the math and the models, and said, you know what, if we did this in space, it would actually pencil. So they're on this grand five-to-ten-year quest to actually manifest it, to create it in space, because the equations say it's possible. And if they do it, it's actually the only path to gigawatts of energy up there in space. So it might be an even more perfect trifecta shortly.
>> Something else I feel like happened over the course of this year is the interest in starting model companies, at both ends, I guess. There are the people who can raise the capital to actually try to build a head-on competitor to OpenAI, of which there are very, very few; maybe you have Ilya with SSI. But then, more so within YC, people are trying to build smaller models. I've certainly had more of those in the last few batches than before, whether it's models to run on edge devices or maybe a voice model specific to a particular language. And I'm curious to see if that trend continues. Going back to the early era of YC, we saw the explosion of startups being created, maybe especially SaaS startups.
Part of what fed that was that knowledge about startups became more dispersed. There wasn't a canon of information on the internet about how to start a startup and how to build software, and then over a decade that became more commonplace, which exploded society's knowledge of startups and how to build things. It feels like maybe we're going through that moment in AI research meeting actually building things,
>> with training models. I think we are absolutely going through that right now. Yes, where it's going from being a very rare skill set to a more common one,
>> because OpenAI a decade ago was rare. You needed a unique combination of skills, right? Your researcher brain, your engineering brain, maybe your finance and business brain.
>> Wait, so did you just describe Ilya, Greg, and Sam?
>> You got it.
>> That was a rare team, right? There just wasn't that configuration of team around very much. And now, a decade later, there's a plethora of people who have the research background, the engineering background, the startup capital-raising background, or at least they're going to be taught how to do all of that. And I'm curious if that just means we'll see more applied AI companies starting, and maybe there'll be even more models to choose from for all the various specific tasks.
>> I think so. The other thing that's contributing and making this an even bigger snowball is RL. There are all these new open-source models that people are fine-tuning on top of, with a particular RL environment and task. So it's very possible to create the best domain-specific model, let's say for healthcare, trained on a generic open-source model, just by fine-tuning it and doing RL, and it beats the regular big model. Actually, I've heard of and seen a number of startups whose domain-specific model beats, let's say, OpenAI on healthcare. There's one particular YC startup that told me they collected the best data set for healthcare, and they ended up performing better than OpenAI on a lot of the healthcare benchmarks with only 8 billion parameters.
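A claim like "our 8B model beats the frontier model on our domain" only means something relative to an eval set, which is why the proprietary benchmark is the real asset. A minimal sketch of such a harness, where the eval questions and the two "models" are invented stubs standing in for real model calls:

```python
# Hypothetical domain eval harness: score candidate models against a
# private (question, gold answer) benchmark. The eval set and the two
# "models" below are illustrative stubs, not real model calls.

DOMAIN_EVAL = [
    ("What class of drug is metformin?", "biguanide"),
    ("Normal adult resting heart rate range?", "60-100 bpm"),
    ("First-line treatment for anaphylaxis?", "epinephrine"),
]

def accuracy(answer_fn, eval_set) -> float:
    """Fraction of questions the model answers exactly right."""
    correct = sum(1 for q, gold in eval_set if answer_fn(q) == gold)
    return correct / len(eval_set)

# Stubs: a domain fine-tune that nails the set, a generalist that misses one.
finetuned_8b = dict(DOMAIN_EVAL).get
general_frontier = {**dict(DOMAIN_EVAL),
                    "First-line treatment for anaphylaxis?": "antihistamines"}.get

scores = {
    "finetuned-8b": accuracy(finetuned_8b, DOMAIN_EVAL),
    "general-frontier": accuracy(general_frontier, DOMAIN_EVAL),
}
# On this invented eval, the small domain model comes out ahead.
```

In practice the scoring would be fuzzier than exact match, but the shape is the same: the data set defines what "better" means.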
>> I guess what's funny is, you do need to have post-training infrastructure. We've also had YC companies where they had something that beat OpenAI, you know, GPT-3.5, and they were doing fine-tuning with RL, but then GPT-4.5 and then 5.1 came out and basically blew their fine-tuning out of the water. You have to keep going.
>> Yeah, you've got to keep going. I mean, you actually have to keep pushing to the edge. Anything else that really stood out from this past year that jumps out at you?
>> It's funny. We started the year with one of our episodes that got a lot of views, around vibe coding. I think we were talking about it more as observing a behavior that was happening among our founders. And I was surprised to see that this became a giant category. There are lots of companies that are winning. I mean, we have Replit, there's Emergent, there's a bunch of them.
>> Varun Mohan had gone over to Google. He released Antigravity. And did you guys see the video? I actually am sort of curious whether they used Nano Banana or any of these video-gen things, because it's a little too perfect. But Google has the budget to do the high-production-value video. It's, you know, Varun at the keyboard, and Sergey is right behind him. It was very cinematic. Anyway, I think Sundar was not only talking about space data centers, he was also talking about vibe coding, and I was a little bit trolling back, because, knowing what we know, vibe coding is not completely usable and trustable for 100% of your coding, period.
It is not true that you can ship 100% solid production code today, as of the end of 2025.
>> Yeah, I was thinking about things that surprised me in 2025, and I think perhaps the thing that most surprised me is the extent to which I feel like the AI economy stabilized. When we did this episode at the end of 2024, it felt like we were still in the middle of a period of incredibly rapid change, where the ground was shifting under our feet, nobody knew when the other shoe might drop, and nobody knew exactly what was going to happen with startups and AI and the economy. Now I feel like we've settled into a fairly stable AI economy, where we have the model layer companies, the application layer companies, and the infrastructure layer companies. It seems like everyone is going to make a lot of money, and there's kind of a relatively clear playbook for how to build an AI-native company on top of the models. I feel like things really matured in that way,
>> which is all downstream of the fact that the models themselves have incrementally improved this year, but there haven't been major steps forward that shook everything up, and that has a knock-on effect. Many episodes ago, we talked about how it felt easier than ever to pivot and find a startup idea, because if you could just survive, if you could just wait a few months, there was likely going to be some big announcement that would completely make a new set of ideas possible and create more opportunities to build things. It certainly feels like that has slowed down, and so finding ideas is sort of returning to normal levels of difficulty, in my experience in office hours.
>> I agree.
>> I'll tell you what's not a surprise. Do you remember that report, AI 2027? It was sort of this doomer piece that said society is going to start falling apart in 2027. At some point they quietly revised it to say it wasn't 2027, but they kept the title. Maybe it's not a surprise; I was always a little bit of a skeptic of this fast-takeoff argument, because even with the scaling law, it is log-linear, so it is slower. It requires like 10x more compute, and it's still sort of topping out, right? And that's one form of good news. Another form of good news, and it's weird to call this good news: human beings don't like change.
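The "log-linear" point about the scaling law can be made concrete: if loss follows a power law in compute, then on log-log axes it's a straight line, and each fixed fractional improvement costs a roughly constant multiple (here around 10x) of compute. The constants below are made up for illustration, not fitted to any real model:

```python
# Illustrative power-law scaling: loss(C) = a * C**(-alpha).
# On log-log axes this is a straight line ("log-linear"), so a fixed
# loss improvement always costs the same *multiplicative* jump in compute.
a, alpha = 10.0, 0.05  # invented constants for illustration

def loss(compute: float) -> float:
    return a * compute ** (-alpha)

# Compute multiplier k needed to cut loss by factor r:
#   loss(k*C) / loss(C) = k**(-alpha) = 1/r  =>  k = r**(1/alpha)
r = 1.12                # a modest ~12% loss reduction
k = r ** (1 / alpha)    # roughly 10x more compute for that small gain
```

With a small exponent like this, every modest gain demands an order of magnitude more compute, which is why the curve feels slow rather than explosive.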
In our previous episode, we blew up that MIT report that said 90-some percent of enterprise AI projects fail. Well, it turns out that 90% of enterprises don't know how to do IT, let alone AI. It's weird to say that's a good thing, but in the context of fast takeoff, that is a real brake on this new, really insane technology actually permeating society. I love to accelerate, but it's weird to say, oh, well, actually in this case maybe that's a good thing, right? It is a shockingly powerful technology, but between log-linear scaling and human beings really not liking change, organizationally speaking, society will absorb this technology. Everyone will have enough time to process it, culture will catch up, and governments will be able to respond to it, not in a frantic SB 1047 sort of way, you know, let's stop all compute past 10^26, right? Just these knee-jerk responses to technology. We're excited that the ARC-AGI Prize is going to come and do the Winter '26 batch as a nonprofit. The funny thing about that is, yeah, maybe there's a team right now that is climbing the ARC-AGI leaderboard, and they're going to accelerate this thing again.
>> Something that surprised me, sort of related to that, with the startups: I remember around this time last year we were talking about how companies were getting to a million dollars in ARR and raising Series As without hiring, in some cases not hiring anyone, just the founders, maybe hiring one person, which just felt very unusual. A year on, that hasn't translated into, okay, and then they went and hit 10 million in ARR, or scaled without adding any more people.
>> No, they turned around and started hiring actual teams.
>> Yeah. Post-Series A, it largely feels like the playbook is the same, and the companies might be smaller for the same amount of revenue, but that feels like it's entirely because they hit the revenue so fast and there's still just a bottleneck on how long it takes to hire people, not because they have demand for fewer people.
>> I still think there is some effect, but it is not open and shut. It is not like you don't have to hire executives anymore. I think there might be a case of two foie-gras'd startups, one being Harvey and the other being OpenEvidence. Harvey's founders are incredible; they were very early. And then there's this idea that, for VCs, you could just go down Sand Hill Road and the fix is in: you block out all of them, and there are maybe 30 people who could write checks of $10 to $100 million, and if you just get all of their money, there's sort of no one who can come in and do the next Series A, and then basically you're safe. It's capital as a bludgeon, capital as a moat, in that case, right? So Harvey is interesting, because Legora is coming fast for them, and obviously we have some skin in the game on Legora, but we think they have as good a shot as any.
>> I guess that's one trend that we saw in 2025: there was a first wave of AI-native companies, like Harvey,
>> who might have wasted a lot of money on fine-tuning, actually,
>> totally, that broke out in 2023 and kind of did a victory lap, like, oh, we've won the space. And now we're seeing a second wave of companies, like Legora and Giga, and it turns out that, actually, it isn't so simple.
>> Yeah. The weird beneficiary of burning some non-trivial double-digit percentage of your capital stack on fine-tuning that buys you no advantage is basically the investors; they're the only winners there, because they just own more of your company, you know.
>> Yeah, at least as it relates to hiring and team size: of the two camps, one being that AI is going to make everything more efficient so you will need fewer people, and the other being that AI is going to reduce the cost and time of producing things, so the expectations from your users and customers will just go up and you'll need to keep hiring more people to satisfy those growing expectations, I feel like this year has been more in that second camp. I think that's what's driving the fact that companies are still hiring as many people as they were pre-AI: the bar for what their customers expect keeps rising, and, you know, Legora's racing with Harvey, Giga's racing with Sierra. They're all still competing for the same set of customers, and they're still ultimately bottlenecked on people. I don't think anyone's bottlenecked on ideas, but they're bottlenecked on people who can execute really well. It still feels like an exciting phase.
>> I agree with you that the era of the one person running a trillion-dollar company is not here.
>> Not yet.
>> Yeah, but I think it's going to trend that way eventually. That'll be a wild time. Maybe that's a prediction for 2026.
>> You think it's coming in 2026?
>> I mean, I don't think it'll happen in 2026 either, honestly. But I think you will have many stories of companies run by under a hundred people that are making hundreds of millions of dollars. Gamma was interesting to see. One of the biggest things they said in their launch, which I think is a very good trend, is that they got to a hundred million dollars in ARR with only 50 employees. It's such an inversion, right? Normally you have the big banner and the little X image, like, oh yeah, we raised all this money, and look at all the people who work for us. It's a good trend to have the reverse flex, which is: look at all this revenue, and look how few people work for us.
>> Well, that's all we have time for this time. We just wanted to wish you really happy holidays and a happy new year, from all of us to you and yours. See you next time.