
Anthropic, Glean & OpenRouter: How AI Moats Are Built with Deedy Das of Menlo Ventures

By Latent Space

Summary

Key Takeaways

  • **Glean's Hard-Earned Enterprise Moat**: Glean built a robust enterprise search foundation from 2019 by tackling unsexy problems like data integrations and ranking without AI hype, creating a moat that competitors now struggle to replicate despite tacking on search features. The hard work in solving enterprise-specific challenges like low query volume and freshness has positioned Glean for expansion in a large TAM for knowledge workers. [03:00], [05:22]
  • **Anthropic's Explosive Revenue Growth**: Anthropic grew from zero revenue to $1 billion in one year and is projected to reach $9 billion this year, making it the fastest-growing software company ever at the billion-dollar scale. This legitimate revenue trajectory, beyond GMV hype, has turned it into a generational company beyond initial expectations. [17:15], [18:04]
  • **Enterprise Search Ranking Challenges**: Unlike consumer search, enterprise search lacks sufficient query volume for feedback-based ranking, with more freshness-seeking queries and long-tail distributions, requiring entirely new signals and evaluation methods. Teams often struggle to understand domain-specific queries and results, making effective ranking a huge unsolved challenge. [11:36], [12:55]
  • **Claude Code's Bold Innovation**: Claude Code represents a critical in-product innovation by enabling agentic coding in a terminal interface, which no PM would design, marking the first practical end-user agent beyond chat and RAG systems. This weird, non-intuitive approach showcases Anthropic's ability to innovate at scale, driving value in coding tasks. [19:12], [19:53]
  • **Anthology Fund's AI Ecosystem Seeding**: The $100M Anthology Fund, set up with Anthropic, invests in 40 companies across infra, research, and devtools like OpenRouter and Goodfire to build a developer ecosystem, with higher graduation rates to next rounds. It allows fast participation in early deals and close ties to Anthropic without corporate VC misalignments. [42:45], [44:00]
  • **Mechanistic Interpretability's Future Need**: Current AI models are black boxes with empirical evals, but for critical decisions like loans or legal outcomes, society needs intrinsic understanding of model outputs, which mechanistic interpretability provides by analyzing weights to detect sycophancy or deception. It's like brain surgery for LLMs, essential for safe large-scale deployment in 2-3 years. [49:38], [51:37]

Topics Covered

  • Can meritocracy unlock hidden talent from non-elite backgrounds?
  • How did unsexy enterprise search build a $7B moat through hard work?
  • Why do AI labs avoid the drudgery of true enterprise search?
  • Does massive compute investment guarantee smarter models?
  • How will AI erode the craft of deep engineering thinking?

Full Transcript

I entered venture and I'm like that is the company I would have built from 2019.

I remember going to parties in the Bay Area and I would say it's shutting down the conversation right there.

Anthropic is the fastest growing software company of all time.

When we invested in the company, it had no revenue. In India, academics holds the same sort of prominence as sport would hold in America. On average, people are quite poor, so education is seen as the means to social mobility. The way it works is similar to countries like China or some other countries, where you take a big exam and you get ranked. Um, a million people take the core engineering exam, the top 10,000 get in, and the top 200 get into computer science.

That's how hard it is. Those top 10,000 get into IIT.

Everyone's heard of that.

That's like where a lot of the great, you know, Silicon Valley people, from Sundar to many other people, come from.

You look at a guy like Rahul Patil, who's become the CTO of Anthropic, and he's not from a top university in India, and he sort of worked his way up to a position of such prominence. It's a testament to the fact that even though you didn't have the opportunities early, and even though you might not believe you could do it, if you work hard enough for a long time on things you care about, anything can happen.

Hey everyone, welcome to the Latent Space podcast.

This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

>> Hello, hello. And today we're finally joined by the epic return of Deedy Das.

Uh, welcome back.

>> Thank you for having me, guys.

I'm so glad to see you. All of us have different jobs now.

Actually, >> all different jobs. All different jobs.

Classic Bay Area. You know, it's been two years, right? So, last time it was April 2023, you joined us uh remote and you were still at Glean back then.

>> I was actually even also looking at the Claude timeline.

Uh, so Claude 1 was March 2023 and Claude 2 was July 2023.

It just feels like so long ago.

>> Man, I remember, I don't know what your first experience using Claude was, but mine was, I remember early at Glean somebody from the company was like, "Hey, there's this interesting new LLM that's not OpenAI, and the only way you can talk to it is by tagging Claude in a Slack channel.

" And that's a bizarre interaction model for a whole new product.

>> The best model.

>> And uh and now fast forward to now and I'm like okay >> it's we've come we've come quite a way.

>> Yeah. I think actually they only recently introduced Claude in Slack, right?

Or >> like publicly >> come back. The comeback.

>> Yeah. Yeah. Yeah.

>> It's like how it started, and now Claude is in Slack. Claude in Slack.

And so since then, I wanted to start with Glean, obviously, because, you know, we're going to cover a lot of startups in this episode. So Glean was like a billion dollars, I think, based on my research, and now it's at $7 billion.

So your options are good.

What's your take on like how Glean's going and the market in general?

>> I would say that now being on venture side, I have a bit of a a different take than I would have had at Glean. But broadly, one of the things that I love about Glean is it's such a boring unsexy company that became sexy later.

So from 2019, I remember going to parties in the Bay Area and I would say enterprise search and it's shutting down the conversation right there.

You know, like nobody would ever ask a counter question if you said enterprise search.

They're like that sounds boring as hell. Leave me alone.

Like, um, and fast forward to 2022, enterprise search got more conversations.

I was like interesting.

Tell me how you're doing this this search.

I think what was nice about that observation is that in those three years we did a lot of work and didn't take shortcuts on a lot of things that ended up generating a lot of value for us now, and I can go into what all of those things are. But if you look at Glean as a high-level business, it is top-down enterprise sales, it's very hard to rip and replace, and we expand contracts very easily because the TAM is so large: every knowledge worker could use a version of enterprise search, and then the AI on top. I still call it search, but it's information retrieval in the enterprise, and we solved a lot of critical problems, I can go into that too, in order to get there. Then comes, you know, December 2022, the ChatGPT moment, and everything that's happened since. And now when I look at Glean, you know, it's a different world. We were very quick and correctly prioritized LLMs early on; it did a lot of good for our business and the company. But now there's fire from a lot of angles. Like, everyone wants to be a part of the enterprise search story, and it makes sense.

I mean, it's a large, unconstrained TAM.

LLMs are particularly useful for gathering information.

Obviously consumers are interesting and enterprises therefore interesting.

How do you do this in enterprise?

Well gather all the knowledge and then put an LLM on top. So that being said I'm still very happy with Glean stock.

You know Glean's also valued at 7 billion not 100 billion.

So I'm I I think the company has a lot of growth. I think it's done a lot of the hard work that nobody's willing to do. And I also think, you know, VCs have a tendency, including myself now, to to trivialize a problem into a one-s sentence sort of narrative.

And with Glean, that narrative was often, oh well, you guys built this enterprise search thing, which never worked, and then AI came along and it started becoming a thing, which I think is not the the story at all. I I really think we did all the hard work to build search and AI happened to accelerate our go to market motion at the right time and now I see companies trying to tack on search.

It's not easy. I know the kind of like last mile stuff we did for some of our customers and I just know that when I think about other companies I'm like would you really go all that distance?

It's not a moat. The moat is just we did the hard work and so I'm pretty happy.

I mean things can go any direction but I'm pretty happy with the the way Glean's going right now.

>> And just to spell out the two main challenges.

So one is obviously Claude I think today launched enterprise search.

>> I was going to say I have screenshots.

>> Did you see, like, "Hey, we're introducing enterprise search"? I'm like, yeah. >> Son of a gun. Uh, and then on the other side you have the data providers adding these rate limits, kind of like Salesforce has done with Slack. It feels like that part is more challenging than the competition from other companies. Like, yeah, how do you think about that?

>> Two questions, I guess: competition and the rate limits. Uh, on the rate-limiting side, it's happened for several of the SaaS tools. I think one advantage that Glean has is, well, the first thing, let me address the premise of the argument. When I think about why SaaS tools would limit API access, inherently it never made sense to me.

I can see why you do it for business reasons.

Maybe you want to launch a competing product, but Glean doesn't eat into your revenue. If you are Slack and you've sold, call it a 100 seats at a company and you have Glean at that company.

If anything, Glean is only shows Slack results to the 100 seats that you've sold. So, we aren't eating into your business. So from a primary first principle is business logic.

I don't see why you'd do it. If Glean is on Slack and more people are searching through Slack, it actually lets you sell more seats, not less cuz we don't reveal permissions to people who don't have access.

If we were to do that, then I could see maybe a business case like, oh, you're taking the Slack data that I've only sold one license for and you're showing it to a thousand people.

That's problematic, but we're only showing it to the licenses that you've sold.

So firstly, that's my first point.

The second thing is we do have thousands of integrations, and in a lot of enterprise customers Slack is really important, that's a critical data source, but we also have many, many more. And so, I mean, it's just, you know, the law of large numbers: maybe if everyone decides to shut it down it could be more problematic, but if one person does, then, you know, less so. And the third thing I'll say is, if you talk to the customers, they're also super unhappy about this, because they're like, "Look, we bought your product, we own the data. You don't own the data."

And so if we want to buy another product to use our data in Slack, why can't we do that? Why are you blocking the API? So those are the three prongs of the argument. I don't know how this will all end up, but I don't think it's that sensible that it is like this, and I'm still optimistic that we'll clear out some of those issues.

>> Yeah. Um anything else you want to say?

So, you know, obviously we're about to move to Anthropic, and Anthropic just launched enterprise search, so what would you say, as a veteran of enterprise search, that Anthropic should take note of? >> The question of the labs competing with Glean has always been a thing since 2022. >> Sam Altman, like we were just discussing earlier, Sam Altman once came out and said if you're an investor in OpenAI and one of these five companies, including Glean, we don't want you as an investor. >> And yet, here's what I see: look at the revenue of Anthropic and OpenAI right now.

These are billion dollar revenue scale businesses.

Glean is several hundred million revenue scale business.

So the way I think about it, and this can even allude to how I think about startups, right to compete versus right to win: for Anthropic and OpenAI to build a deep enterprise search system, it doesn't make them that much money.

They have to put in all this effort to make what, an incremental 100k sale, 200k sale, maybe even a seven-figure sale.

Is that moving the needle on your, you know, five-plus billion dollars in revenue, or ten-plus by the end of the year for OpenAI? Not really.

And the amount of effort it takes to get there is big sales teams, huge FTE teams, tons and tons of customization.

And my question is, like, in the long run, you could build a semi-reasonable enterprise search tool, but if you really want to go deep, I don't think you will ever dedicate the people to do it.

And the last thing I'll say is, think of it from an Anthropic engineer's perspective.

You joined a big AI lab to work on models not to build Google Drive connectors.

Right.

>> A meme like you know I build the [ __ ] integrations.

>> Build the integrations.

>> I think I'm still very bullish.

Um but yeah, competition happens.

So >> yeah.

Yeah, it's actually I wasn't asking about competition. It was just more about uh what are the hard problems that people don't appreciate.

>> Oh, okay. We can talk about that.

>> That was probably safer category for you.

You know, basically like uh you know and I'm in this boat as well.

I've joined an enterprise AI company that has to worry about and and build for these issues.

And I'll just give you one example.

Until this point, we never had to deal with two Slacks. And enterprises have, like, you know, when you acquire another company, you have different systems and they all duplicate and they all overlap.

>> Yep. Oh man, I have some great stories about >> Devin, you know, like, I'm sure there's some pro-user version of this, but I still haven't figured out how to use Devin properly with two Slacks.

>> Wow.

>> That reminds me, that was a thing that we had to address at Glean.

I think like every enterprise company has like the same sort of hurdles.

>> We looked at each other. We're like, "Oh yeah, we're a real enterprise now.

We have two two of everything.

" Like >> that's funny. Okay, Glean.

Bunch of interesting problems. I'll I'll talk about some of them. If you want to prod feel free.

I think number one most interesting to me when I joined the company was consumer search was largely regarded to be a solved problem.

Not really, but largely. The way most consumer search systems work is by aggregating feedback data on how users use search, whether they click, hover, how long they stay on a website.

And that's what powers ranking systems to get better over time. Very very powerful critical way of how like Google, Bing, and all the above work.

In enterprise, if you take a 10,000 person company, even if every user issues two search queries a day, which is quite a lot, say even five, I don't know, that's just not enough volume to have any meaningful quantity of feedback for this to be relevant.

On top of that, add the fact that freshness is way more critical in the enterprise in certain ways than it is in consumer; there are more freshness-seeking queries in enterprise than there are in consumer.

And then number two is that the distribution of queries in consumer is very head-heavy.

It's not in enterprise.

In enterprise, maybe the query that everyone wants to search for is benefits or payroll.

That's not that useful, really. Like, every person is doing a job, and they have different needs and different things they want to look up.

So given all of that, the techniques under the hood that work for consumer don't translate to enterprise.

You have to invent a whole new set of signals that actually makes enterprise search work, and evaluation becomes very, very difficult too. In consumer you have tons of data to pick and choose how you want to evaluate what the right results are to show for a query. In enterprise, and I tell this story a lot, we would look at some of our customers' data, look at each other, and go: we don't really understand what this query means, we don't really understand what these results are, we don't know what the right ranking is, we have actually no idea what we're doing. And that happens; it's so out of domain even for us, because some of our customers are working on very, very specific problems. So that's one huge, huge challenge: how do you make ranking work in enterprise in a great way? There are many; I'll touch on the second interesting one.

The second interesting one is that selling productivity tools to enterprises is challenging, because no matter what ROI argument you make, people aren't actually buying tools for ROI. People buy productivity tools because their users like using them.

So, for example, when people buy Slack, I don't think any buyer is going like, "Let's measure how much faster our how much more productive our our team is getting by using Slack.

It's probably not even getting that much more productive.

" That's not what they're looking at.

They're kind of saying, "Everyone uses Slack.

It's pretty useful. I'm going to keep Slack.

I don't think we're going to churn off that one.

" If you take that analogy to search and and search systems, the issue is search systems aren't inherently viral or growthy.

Slack has a very clear virality moment.

Like, everyone's talking to everybody else, and so that's just how you have to speak. In search, it's kind of a one-player game. You're not really sharing things.

You're not really talking to everybody else.

So the challenge for us was, how do you sell a productivity tool by getting everyone to love it on day one, for a product like search? It's not easy.

If you look at how Google did it, they had Chrome.

So great, have a great source of distribution, get everyone to query, and then they'll learn to love it, hopefully. So we had to figure out what that meant in the enterprise as well, and how to get everyone to adopt and embrace and love this new tool.

>> Yeah.

>> So, two of the many. >> Good pointers.
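To make the ranking point above concrete, here is a minimal, hypothetical sketch of the kind of signal blending enterprise search needs when click feedback is too sparse: a toy score that mixes lexical relevance with a freshness decay and a hard permission filter. The documents, field names, weights, and scoring functions below are illustrative assumptions, not Glean's actual ranking.

```python
import math
from datetime import datetime, timezone

# Toy documents; fields and values are made up for the example.
DOCS = [
    {"id": "doc1", "text": "Q3 payroll calendar and benefits enrollment dates",
     "updated": datetime(2024, 9, 1, tzinfo=timezone.utc), "allowed": {"alice", "bob"}},
    {"id": "doc2", "text": "2021 payroll calendar (archived)",
     "updated": datetime(2021, 1, 15, tzinfo=timezone.utc), "allowed": {"alice"}},
]

def lexical_relevance(query: str, text: str) -> float:
    """Crude term-overlap score standing in for a real retrieval model."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def freshness(updated: datetime, half_life_days: float = 90.0) -> float:
    """Exponential decay: a document loses half its freshness every half_life_days."""
    age_days = (datetime.now(timezone.utc) - updated).days
    return math.exp(-math.log(2) * age_days / half_life_days)

def rank(query: str, user: str, w_rel: float = 0.7, w_fresh: float = 0.3):
    """Score only documents the user may see, then sort by a weighted blend of signals."""
    visible = [d for d in DOCS if user in d["allowed"]]  # permissions are a hard filter, never a score
    scored = [(w_rel * lexical_relevance(query, d["text"]) +
               w_fresh * freshness(d["updated"]), d["id"]) for d in visible]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    print(rank("payroll calendar", user="alice"))
```

In practice the hard part is less the blend itself and more discovering which enterprise-specific signals exist at all (permissions, recency, org structure, document type) and how to evaluate them without consumer-scale click feedback.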

>> Yeah. Just a question on that.

Was there any because you know, oh you have a new search tool.

It's like go search and it's like what am I searching you know like what was that blank canvas on boarding for people? Anything good?

several different things worked well for us.

Uh I can think of two at the moment but I'm sure there were many many more.

Um I'll say one of them was say for for a handful of companies like many companies actually we would say we want to take over your new tab page >> and then the critical part was tell us what we need to do to earn the right to do that.

No one wants to give away their new tab page. So so so we went the last mile and there were companies who were like well we have a new tab page.

We're pretty happy with it. So we'd ask, "Do you have a search bar on it?" And they'd be like, "Well, yes." I'm like, "Okay, what is what is that using?" And they'd be like, "Well, it's using our internal thing.

" I'm like, "Do you like it?

" Clearly not. That's why you're talking to us.

So let's just rip and replace that.

But doing that extra mile was pretty important.

So that's one new tab.

The second one that we liked was the Chrome extension, and then doing the, I forget what we called this, but when you were on a native product and you were issuing a search query, we ran a lot of evals.

And we thought we were better than every product at their own search.

So if you were searching on Google Drive, we would do a Glean replace of the search bar and the page pretty natively, and it would teach people to use Glean and be like, okay, this is pretty useful, I think these results are great, and it automatically filters to Google Drive anyway.

So, you know, functionality is not lost, and we would slowly get people into the ecosystem that way.

Yeah, superset adoption. Uh, something that OpenRouter also does.

Okay, so Anthropic. We have to obviously address the elephant in the room.

You guys are huge, huge Anthropic investors.

I think right after you maybe got promoted, or you became a partner, you guys led the D.

What's the chronology of that?

>> I think we did part of the C and then the D, and then every single round we had more than >> pro rata.

Yeah.

>> Obviously one of the greatest companies in AI. I honestly had no idea that we would be sitting here, like, Anthropic has 10x'd in the time that you've been at Menlo, and I just, what's it like being an Anthropic investor?

What what do you think about what are considerations back then versus now?

>> Anthropic is the fastest-growing software company of all time? I think I can say that fairly. I haven't been disproven yet.

So I think the >> People say that, but, like, everyone says, you know, we're, like, first to 1 billion, first to 100 million.

I don't know. It's hard to tell, but >> I do believe the numbers are zero to 100 million in one year, 100 million to a billion in one year, and this year it would be one to, the projection that is public is, nine.

But even to this point, like, I know, we've seen the graphs on Twitter a lot, some of that is [ __ ], some of that is GMV, all this other stuff, but in Anthropic's case I think it's fairly legit revenue, and I do think it makes it the fastest, definitely at, like, the 1-billion-plus scale.

I can't think of too many examples.

So it has clearly outdone itself. I would say that when we invested in the company, um, it had no revenue. I mean, that's just fact.

So when we wrote our first investment it had no revenue. It was at a >> $18 billion or 4 billion?

4 billion. >> Right? It's been fascinating to see this company succeed.

I I couldn't have predicted it.

We all of us this was beyond our wildest expectations.

I think whether or not it continues to perform at this rate, I believe it will, but it is already somewhat of a generational company in many ways.

And so kudos to the team for delivering these awesome results.

You know, one of the risks I would say, kind of taking a tangent, one of the risks with a company like Anthropic is you essentially had a team of extremely idealistic researchers, and very often, you know, the standard deviation of outcomes when you have teams like that is quite large. There was a world where maybe it would have not worked at all and would have absolutely fizzled to the ground. But I think it is the same qualities that gave them a high propensity to fail that gave them a high propensity to succeed.

And if you look at, there are many other things they did right, but if you just look at a product like Claude Code, there are not many in-product innovations in AI that I can think of that are as critical as something like that. Because we had the whole chat era of RAG systems and ChatGPT, that was a critical innovation, but since then there were a lot of followers, a lot of deep research, which is kind of, I would say, an addendum, a couple of other things happening here and there, agents, cool.

But you know, if you think about agents that actual end consumers use and gain value from, in my mind at least, Claude Code was the first time I saw that, in a terminal, in a weird interface.

It was just weird. Like it was like every PM's nightmare.

No PM would have thought of that.

And so it's >> except for Catwoo.

>> Yes. Except for Catwoo. And so, you know, it kind of gives I I it goes to show how um Anthropic is able to function as a company to be able to innovate like that, which uh is is is quite rare, especially for that scale.

>> To some extent, I think you just like hire good talent and then like let them loose with a lot of tokens, see what they come up with. They they they tend to build good stuff.

>> Well, like it's interesting to talk about, right?

Like cuz take OpenAI and Deep Mind as a comparison point.

Like, I think we'd all agree they all have great talent, but they don't all innovate the same way, and it's always been interesting, just as an academic exercise, to think about different leadership styles. And maybe, from the outside looking in, you'd be surprised how little I actually know, from an investor standpoint, about how Anthropic actually operates. But it seems like it's a company that has, you know, such high retention numbers on employees because they are very free-spirited in how they let the employees guide the direction of the product, versus other companies which are much more either top-down or prescriptive, like, hey, we need to go after this and we need to go after that.

It's like hey let's see let's see what happens try.

>> Yeah, I think, um, at my last conference SignalFire had some stats. They track all the LinkedIn pages of everyone, and, like, Anthropic has the best retention, and it's, like, a net gainer, whereas everyone else is like a net donor of employees, something like that. I'm referring to the exact same article, where I think their one-year retention on employees is 80%, which in the AI world is quite wild.

>> Yeah. And I mean, Anthropic does not have image generation.

They do not have an IMO gold-winning model. I feel like they just do their own thing.

They do it great.

>> They have nice hats.

>> Yeah. They sell out thinking caps.

So actually I really wanted to discuss this, but I don't know how. I think I need to get, like, some marketing PR agency person, because people actually forget that in 2024 they had out-of-home advertising campaigns which sucked.

Everyone was dog-piling on them, and then this year it's, like, slightly changed.

It's still Anthropic, but, like, slightly different, and they just decided to focus on thinking, and, like, suddenly everyone loves them.

Uh, and they have, like, the cafes and all that.

Like, it's a very interesting public-image rebrand, and I don't know if it's because the models are just better or it was actually, like, PR. Like, which one comes first, chicken or egg? Models or PR?

>> It's a good question.

>> Yeah, >> it's a good question. I would say though like ignoring the model side like like I do think this one is like aesthetically better.

>> Yeah. Purely like purely it looks nicer.

>> Yeah, and the vibes. And, I don't know, I have sat in those meetings, and it's like someone's pitching you an idea and you're like, I don't know, looks good, okay. And then it becomes one of the most hated campaigns of all time, and then one year later someone else comes with a slightly different-looking idea, and the words are different in, like, four ways, like they chose slightly different words, but it's not that many words, and suddenly that one is the one that works. >> Well, as somebody who writes online a lot, I can relate to that. Like, a couple of things different can be the difference between something people care about and not. Yeah. Early on at Glean, I had such run-ins with marketing, because the first campaign we actually did, I was just like, really? "AI for work that works."

>> Okay.

Like >> was that a hit?

>> No.

I mean >> in enterprise like how does one even measure what is a hit? What is not?

I mean no one really cares enough I feel one way or the other. Um, but you know, you we've all seen like really cringe AI ads.

If you've seen the Cisco ad in the airport, I hated that one for a while.

>> All kind of generic. So, I like Anyway, I like the anthropic one.

>> Okay, I'm going to sprinkle in some of your tweets.

So, you had one tweet about the billboard where the Reddit guy was like, "My boss really wants you to know that we're an AI company.

" I thought that was the single most honest billboard I've seen in San Francisco.

>> Absolutely.

I think the it's like the testament to all the comments of people going like yeah I relate. Um I mean we've all heard it like everyone it feels like even on the technical side people are struggling to catch up gain a sense of meaning again.

I've had developers go like [ __ ] man like is this it? Like what do I do anymore?

And even that's happening on the technical side of people who semi understand what's going on.

On the non-technical side, people are like, "So there's this new thing. It's AI and generally my boss literally just wants me to do something in it and I don't really understand other than chat is quite helpful.

" >> Yeah, I have some charts. Uh I don't know if you like uh have any of these like in in mind, but I'm just going to sort of bring up some of the anthropic charts which I think it's just I want to just put it on the record for people who are not paying attention to to understand in 2023 according to these are Menllo numbers, right?

uh 2017 market share for openi was 50%.

And when uh mid 2025 you guys have openi at 25% market share an enthropic was at 12 now at 32.

>> API enterprise API market share >> correct.

So I I should clarify that that is enterprise LLM API spend >> the market that anthropic happens to focus on.

Yeah.

>> Um, and critically, it's also spend numbers, not token numbers. So I think those clarifications are important, and also the methodology is going and surveying, you know, vast amounts of enterprise users on how they are doing their spend. >> But that being said, yes, the point remains. >> Uh, market share of OpenAI has gone down. It's not a negative; obviously OpenAI has done super well. It's just that diversity has gone up. Like, it used to be there was basically only one choice, and now there are, like, three or four legit frontier labs, maybe more than that if you count all the open models as well. But I think it's just super interesting, and still under-discussed, that you can actually build, like, a sustainable advantage as a frontier lab.

>> You know, I'm sure you guys remember there was a lot of conversation at some point about the commoditization of models, and to an extent maybe it's happened. I mean, a lot of the frontier models are neck and neck on a lot of things.

Um, but in practice, and this data was in that market map, that market survey, as well, is that once people like something and they get used to it, they don't really churn off it once it fits their needs.

And so we've seen a lot of that. So there's a lot of like churn and hobbyist developer type category.

But in terms of enterprises, often what'll happen is they'll buy up large chunks of long-term compute and dedicated instances in which case you just don't churn, right?

Like this is this is what you use. So I think that's part of the effect and and you know to commend OpenAI like OpenAI was just focused on something else which is you know they have they've launched the most incredible consumer product that we've seen since god knows when.

So you know so they were probably not focused on enterprise until now again.

>> Yeah. How do you underwrite the company internally as you invest? So, I mean, even since, we talked about Claude Code, right?

It's like I think that was like a pivotal moment in like the trajectory of Entropic.

What are the things that matter to you when you're like looking at a company like Antropic?

Like does this market share number matter?

Like, how do you evaluate both the opportunity and, like, what are the numbers that you really care about, versus, sure, higher market share, but, like, that's not what we cared about? >> I don't think the market share number, the market share number is more critical to understanding the TAM at that stage, to be very honest with you. At the stage that we invest in Anthropic now, the only things that would really move the needle on the decision are: here's the revenue, here's the margin, here's the trajectory, and here are the other markets we may be able to underwrite that they want to go into, that they may be early in or planning on going into. I think it's really difficult to underwrite on market share, other than knowing what the potential cap of the TAM might look like.

So the pie will also expand, potentially, but other than that, I don't think it's, it's just a nice vanity metric more than anything else.

>> Yeah. In your mind, is it kind of like, you know, people in crypto are always about the flipping of like Ethereum and Bitcoin?

Like, is this something that matters? Like, Anthropic can go to 50%?

Or is it that OpenAI was only at 50% at a moment in time, which was a new market?

Like yeah, I'm curious how you think about that.

>> I don't want to color, like, the way Anthropic, or the way all of us, think about this, but I just don't think it matters that much. In my view, I'm a very paranoid person with startups and companies and technology. And so in my view, I'm like, great, now let's make it last.

Or like great, but what's next?

And so to me, it's like nice to have.

It's really not, um, I mean, look, if we're investing in a round right now which is, like, north of 170 billion, sure, it matters, some of the numbers matter, but the future of the company is, all the value is really in what we underwrite as the future, and the future means that I'm more concerned about what's happening next.

What are the new models?

How do you gain market share? What has to be done?

What are the new products that that are going to be built?

I'm less concerned about like where it's at right now in terms of market share.

But that's just me. Yeah, I don't want to speak for others.

>> Yeah, I think the new models are are really good.

I mean, uh, Opus 4.1, Sonnet 4.5, Haiku 4.5 all all released in the last few months. Uh, and it's it's really interesting. I think OpenAI and Gemini are in this sort of price war a little bit with the the Pareto Frontier that I I track uh in terms of like LMIS versus uh the pricing and Claude can still charge a premium but still like have a lot of market share obviously and I think like that's just because they have a better model and like people just naturally gravitate to it especially for coding but also other things and um I I just think like articulating what makes a model good is just very very difficult.

Obviously, this is benchmarks and evals and everyone has like okay today it's your turn to be best at Sweepbench and then like tomorrow is my turn. Uh but like it's it's really stupid like we're we're just like talking about like you know 0.12 differences in in like sweet bench but I wonder you know if you're talking about like okay I am investing $13 billion in anthropic for series F to underwrite cloud 5 right what what does it have to do like what kind of what kind of conversation does that look like I have no idea I'm not saying that you know but I'm just like >> I would say that despite what you said about the premium I think it's everything you said is Um, I still do worry. I think cost is is a concern for a lot of people and so the per the purto frontier does still matter.

I'm glad anthropics where it's where it's at right now, but who knows where that changes. when it comes to like cloud 5 and thinking about the future.

One thing I think about, actually, that's really nice: I think we can take for granted right now that furthering the intelligence of models in ChatGPT, a consumer product, does not lead to more users or more retention.

It only is really applicable to a thin slice of users who care about very smart type queries.

Right? And I would say maybe like under 10 million, right?

Maybe that's just a random estimate, but most of the 800 million users on ChatGPT are asking, like, "How do I fix my dishwasher?

How do I like like rephrase this email that I have sent to somebody?

" And that's done.

Like, we know how to kind of do that.

So, what's interesting there is, now that means we're at a point in consumer where, maybe it's too early to say, but OpenAI has kind of won, right? Like, how do you catch up to something where model quality is not going to be differentiated?

You already have the users, you already have the retention, you already have a great product, and people are paying. But the interesting thing about Anthropic is, if you look at coding, that's probably never going to be the case. Like, there is always an increasing frontier of how good you could be at a task like that.

And we're nowhere close to that frontier.

So it's more possible to underwrite the quality of the future models, versus, like, OpenAI, where it wouldn't be as much of a revenue driver on their consumer business as it would be for Anthropic.

>> Yeah. Talking about coding, let's just talk about it, because I think this is also a very fun discussion.

One, there's the, what are the margins of Claude Code, for which there are some numbers, I don't want you to get yourself in trouble. But then there's also, how do you think about the Claude wrappers, right? And we've talked to Bolt and Lovable, but I'll also put Cognition and Cursor in there as well, right? Like, how do you think about this market where, basically, there's a whole ecosystem of startups that have all done really well built on top of Claude? >> I think it's great. I mean, there's >> Is it sustainable?

>> I don't see why not. I mean, I kind of will allude to the margin question, which is, can Anthropic continue to do this strategy? Which, you know, I'm not going to comment on the margins, but, like, if you are trying to build out an enterprise-friendly business, there are two broad approaches, right: high customization and high price, which is usually less scalable, and then you have low customization, low price, which is very, very scalable. In a SaaS world, I guess it's the Slack-Palantir continuum. And so this is kind of different, but generally Anthropic wants to play here: scale fast, keep it cheap, get everybody on it.

If we trust that most people, or a significant number of people, will stay on Claude if they continue to build products on top of it, then I think that's a win for the ecosystem and it's a win for Anthropic.

I don't see why they would care.

I think the interesting thing, and again, I don't know what Anthropic's future plans are, but, like, you know, Ben Thompson obviously talks about this, is classic strategy, which is: every time you own, I guess, the means of production, you will end up getting into the markets that your users >> use you for. And so the classic Amazon example, which is, first you are the market where people sell, you find all the places where you can sell things that are commodity at high volume, and then you start creating batteries, Amazon-branded batteries, and then you push out a bunch of people who sell batteries.

So that's a risk, I think, for those companies that use Claude heavily and rely on Claude to think about. But at this point in time it's too early; like, I don't think Anthropic is anywhere near thinking about that, because you're still very much competing with other models on that layer.

>> Yeah. playing a different game.

>> Yeah.

>> Yeah. It's interesting like would you rather be an investor? This is basically model layer versus app layer.

So far model layer has won and I think there's been there was this there was a kind of an app layer summer and then now now it's like very back to models again.

>> I mean I I like the discussion.

I like the discussion because uh I I was I I was at a dinner where we where like somebody was talking about this kind of question and I was thinking about it more I just at that dinner and maybe this is this is an illformed thought so like feel free to push back.

>> Yeah, we're riffing. But when I think about, like, moats, it's classic VC-startup banter. In my mind, I think the moat is whatever is the hardest to do in any part of the stack. And so when I think about people that tend to dismiss, there are other aspects to it too, but people tend to dismiss, like, oh, you know, the app layers will capture all the value.

Well, if the app layer is easier to build, I think the model layer is harder and therefore will naturally capture all the value, net of competition from other model providers. So, said a different way, it is far easier for Anthropic to try to go into one of the spaces of the apps than for an app to try to go into the space of Anthropic, which makes me feel like one is more defensible than the other. Um, all else equal.

So I think both can thrive and that's ideally what everybody wants.

But um yeah.

>> Yeah. I think, very brutally, as an investor, and as a human with my own limited time on earth, uh, you know, if Anthropic can go from $4 billion to $183 billion in two years, uh, then everything else is a waste of time. You know what I mean?

Like, so, like, uh, you kind of do want to really get this right.

You can't just be like, oh, like, everyone's great, and, you know, sort of hedge your bets. Like, sometimes you have to go all in on the right thing, and you spend a lot of time and effort identifying the right thing. And so, yeah, that's what I'm trying to do more of these days.

I think the means-of-production thing is interesting, because Claude Code only makes sense to build if it's, like, the best thing, right? Because if Claude Code is, like, mid, they're better off promoting Devin and Cognition to sell more tokens. >> So I'm curious, like, as the market gets more competitive, in one way it's like, well, we don't want you to use Devin, because Devin supports all the models, and so we end up losing some of the revenue. But right now Claude Code is obviously the best way to use the Claude models, so it drives the usage. But I'm curious whether in the future there's going to be more pressure of, like, hey, this product actually needs to be great to make sense for us to keep investing our resources into building it.

>> Yeah. So, going from model lab to model lab plus product company, right, which is what OpenAI has done.

>> I would push back on, well, A, I don't think everyone would agree that Claude Code is the best way to use Claude.

I've heard multiple people, even in the last few months, say, I'm a Cursor guy, or I'm a Devin guy. Like, people have their preferences.

So I don't think it's set in stone.

However, Claude Code is a great way to use Claude also, and there are nice flywheel effects, obviously, because once you capture the way people are using Claude Code, you also get so much data to then make Claude Code better over time.

So, I think those are the two main reasons. But at this point in time, and maybe this is oversimplifying, I can't think of too many apps that have a very meaty layer on top of the model that's, like, very impressive yet.

There are somewhat meaty layers and it's getting there.

It's a time thing as well, right?

Most of these companies haven't existed for more than two years. So, um, I think it gets there, but I don't think we're at a point where, you know, we're like, "Holy [ __ ], that app has so much stuff, interesting things and technology built on top of the model, that it becomes so difficult for the model company to go and try to compete." I think tomorrow, if Anthropic or OpenAI decided to take on another app, given their distribution and their engineering, and the fact that these layers are still not as thick as you'd like them to be, technically they could. Whether they should or not is different, but they could, and that's something I do think about. >> Thank you for engaging in all these very meaty discussions. >> Yeah, you don't even work at Anthropic, so I know we put you on the spot. >> Yeah, but this is what I want to get on the podcast, because a lot of people don't get the chance to talk about this, but this is like a normal SF dinner. The last hit on Anthropic I'll point out, which is more fun, is that there was a new CTO joining Anthropic from Stripe. >> And, uh, you know, you're, like, the king of India-posting. What's the significance of this? You know, last time you were on the podcast you talked a lot about, like, the Indian university system and all that, and now to see this guy rise up. >> In India, largely, academics holds the same sort of prominence as sport would hold in America. Everyone talks about it, it is Asian culture, it is top of everybody's mind, it is something a lot of people want to be good at, and it's an extremely competitive society with a very large population. And on average people are quite poor, so education is seen as the means to social mobility by a large number of people in India. The way it works is similar to countries like China, or some other countries, where you take a big exam and you get ranked. A million people take the core engineering exam, the top 10,000 get in, and the top 200 get into computer science.

That's how hard it is. That's pretty hard.

And those top 10,000 get into IIT.

Everyone's heard of that.

That's like where a lot of the great, you know, Silicon Valley people, from Sundar to many other people, come from, from IIT. And in India, often what I have seen, and this is something that I'm generally very curious about, is: what is the motivation of humans, and what is the dictator of outcomes in their life and their career? And one thing I've noticed a lot is, A, there are some societies that are inherently, I think, less meritocratic, where you get so judged for what you have in the past that you're not allowed to prosper later, and I think many work environments in India and other places in Asia can be like that.

Uh number one, so you're not judged on the merits of your work.

You're judged on the merits of what you've done.

And number two, there's a very strong self-fulfilling prophecy effect, where I've seen people who underrate themselves because they think they couldn't be number one at something.

>> It's like your own mentality.

>> It's your own mental block, where, like, I couldn't get into, like, I don't know. You know, people in the Bay Area are also like this.

The Bay Area is kind of like Asia.

Um, in the Bay Area I know people who grew up going, I couldn't get into a good college, therefore I am stupid, and therefore I should not work that hard. Right? Like, it's inherent that they could be smart; they just believe they're not, and that also has a psychological effect on your long-term prospects. You look at a guy like Rahul Patil, who's become the CTO of Anthropic, and he's not from a top university in India. Some people obviously debate that, but in general I don't think it's a really well-known university in India. And he's come to a society that is quite meritocratic, and he's sort of worked his way up to a position of such prominence.

I don't know him.

I don't know everything else he's done, but it's a testament to the fact that, you know, and I think this is why it resonated with so many people, even though you didn't have the opportunities early, and even though you might not believe you could do it, if you work hard enough in certain environments for a long time on things you care about, anything can happen.

And I think that's why I wanted to share it.

I thought it was >> And you choose to work at Stripe and Glean and, you know, do well.

I think choosing the right company is also very, like, okay, if you're not going to do the credentials path, you have to be lucky and selective about working at good places, and a lot of people make that mistake, and I definitely did.

I had good credentials and I worked at bad places, and, yeah, it's very interesting, that kind of >> You work at a pretty good place right now.

>> Yeah, but I I took I took a long time to get there.

I mean, just, you know, this is funny: I have this, like, automated podcast research, and when it sent me the email about you, it's like, you know, Deedy has a strong presence in AI, and immigration were the top two topics that it talked about. Yeah.

Let's talk about the Anthology Fund. Um, so it's a $100 million fund in close partnership with Anthropic.

Like talk a bit about that.

I think people are really curious about how close that actually is.

>> Yeah. So, you know, the Anthology Fund, we set it up when we invested in Anthropic, around the beginning of last year, and the sort of idea was, okay, Anthropic, again, it's so hard to think about; Anthropic was a very different company back then, it was a much smaller company. And they were like, look, there's incentive for us to run our own fund, OpenAI runs their own fund, and there's a developer ecosystem that we want to create around this. It's really nice to have great startups that are using Anthropic close to Anthropic, building around Anthropic. And we said okay, but we had a discussion about, do you want to have it inside Anthropic or outside Anthropic? Because inside Anthropic would mean a corporate venture fund; you'd have to hire for that, you'd have to have a whole role. And typically, if you look at corporate venture funds in history, obviously besides OpenAI as a notable exception, they tend to not be very good, because all they prioritize is who uses my stuff the most, and that's not a good way to invest in companies. So we thought this would be better, and the incentives in corporate venture funds are a little bit misaligned.

So we we did that and now we look back at this fund obviously anthropics in a very different place.

We've we've funded about 40 companies.

The rate, it's kind of a hard thing to calculate, but the rate at which companies graduate from when we invested in them to the next round is significantly higher for Anthology Fund companies, and we write both small and lead checks. I mean, several notable companies from the Anthology program have been OpenRouter, Goodfire, a company called Prime Intellect, Wispr Flow. So there's quite a handful of pretty interesting things here. And, um, yeah, I think the other really nice thing about it is it really allows us to move fast on companies where we may not feel immediately comfortable or ready to write, like, the full check. So we can participate in a round and then get closer and hopefully build a relationship and lead the next round in the company in the future, and it also lets them get really close to the Anthropic ecosystem.

So we have all these events with, like, the founders and the execs and things like that, and people really enjoy hearing it from the horse's mouth. Now, I think, you know, I would say, like, Anthropic is in such a different place; it's no longer an unknown entity.

So, um, the program gets a lot of demand, but, you know, people kind of know what they need to know.

And so we're still working on, like, how do we make this program more useful and more beneficial for founders and Anthropic alike.

>> Yeah. Well, so you know, I congrats on all this.

I think is pretty successful.

One thing, one reason I'm trying to highlight this for Latent Space, is also, like, how does AI change venture, right?

And something that Alessio is exploring as well. And that's why, like, I don't really know how to categorize the Anthology Fund, because it looks kind of like what Conviction is doing, or what YC is doing, maybe, but, like, later stage, right? Like, um, some of these already have their C's, some of these already have their... Abacus is in there? Is that, no, that's a different Abacus. >> But, um, what's the model? Like, what are the predecessors that you draw inspiration from for, like, setting up this fund? Or is it, like, a corporate venture fund managed by Menlo, somewhat funded by Anthropic?

>> I would say like you can think of the companies that go into Anthology in three categories.

One is strategically important to Anthropic and those could typically be somewhat later round, somewhat bigger companies.

Two are companies that are using Claude heavily and are just great companies to be to be in.

And three is just very, very early-stage founders that are very high potential, that may potentially be using Claude models and Anthropic and so on.

We don't require people to use a certain model or another. So we keep it pretty open, and we do everything from, like, a 100k check to a $20 million check.

So, like, I think it's really broad in terms of what we can do, and we wanted to intentionally keep it that way. When it comes to where we draw inspiration, there are some old, old examples, but I don't think they're really relevant.

There was a fund called the iFund that Kleiner did with Apple way back in the day, which is kind of similar.

>> How did that turn out?

>> Um I don't remember. I I I don't actually have enough data on that but that's one example.

>> You know the answer.

>> Um no I I I'm sure there are some great companies that came out of it.

I just don't know who like the details about who what was in it.

>> So yeah, I mean I think so that that's kind of how it's been for us and I think it's been a really great program and we've had >> I mean we were excited about the companies that we could lead the rounds in as well.

>> Yeah. I wanted to get quick hits for people who maybe have never heard of Goodfire. And, like, I know them because I've invited, um, Mark, uh, to my conference, and I've been to a bunch of their events. Actually, I'll just give you that list, right?

Goodfire and Prime Intellect are in your research category, right?

There's others with like diffusion based language generation, novel architecture.

Uh it's all over the place.

Research is, like, the most Wild West part of this.

How do you view like sort of research investing?

>> I can talk about any of those companies briefly as well. But the way I view research investing is: it is extremely hard to pull off, but when you pull it off, the results can be very remarkable.

One of the hard parts is the tension between, do you keep investing in research, hoping for something that yields a better result that leads to a better product, or do you try to monetize and scale what you have already? That's tough. It's a really tough decision to make when you're, you know, working with those founders, you're on that board. It's somewhat anxiety-inducing when you're thinking about this even from an investor standpoint: do I just get to, like, a couple million ARR and start doing something, or do I keep the research bet going? The way I think about research investing overall is, honestly, follow where the talented people have the most competence, and then have an idea around how this could be useful, in what I call a top-down way.

It's not really top-down, but the way I frame it is: if I fast forward 10 years into the future, what do I think is very likely to exist, and what are the ways I can get there? If I believe strongly that there's something like that, and I believe there's a team very strongly headed in that direction, I can sort of draw a dotted line and go, like, okay, maybe we can see something here.

So that's how I broadly think about it.

>> So concrete example, Goodfire is like the most the most uh interesting one.

Mechanistic interpretability.

I didn't even think that was a market that was worth investing in, but obviously Anthropic does.

>> Uh and uh they seem like they have good vibes.

What's the I guess the the summary of your of like your take on the company?

The way I think about the company is: right now, almost all frontier, and many non-frontier, AI models are complete black boxes.

We don't understand why they produce the outputs they produce.

All of the evals and studies on them are empirical, not intrinsic to the model. So it's like, hey, here are the outputs we saw, and therefore this is the benchmark score, or this is how we think it did.

If we believe as a society that 5 or 10 years in the future these models are going to be critically important for making pretty heavy decisions, anything from whether somebody should get a loan or insurance to a legal decision, then I don't think the black-box approach is long-term scalable.

It's just not how society can function, where you throw your hands up and say, well, this is what the model said, and then I asked it to explain itself and it said this other stuff. Great. That's kind of what we have today; that's the best thing that we have. Mechanistic interpretability is really going into the weights of the model and trying to figure out why the model did what it did. One of the more concrete and relatable examples of this, which you guys may be aware of, is that GPT-4o had this phase of sycophancy that a lot of users really liked, but it's one of those things that's not easily detectable in an eval unless you're specifically testing for it.

Even then, it's quite hard.

It's very personalized. It's not like any obvious keywords arise, but it is something that's quite easy to tell even with current interpretability methods.

You can tell when a model is being sycophantic.

You can tell when a model is trying to lie.

You can tell when a model is trying to steer you or persuade you of something.

And so I think if we push that research direction 2-3 years into the future, we will be able to understand why models say what they say.

It's brain surgery for LLMs; that's my catchphrase.

But it doesn't apply only to LLMs, it applies to all models, and that is a pretty important insight for deploying AI at scale.
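(A rough illustration of the kind of technique being described: a linear probe trained on a model's hidden activations to flag a behavior such as sycophancy, without reading the output text. The activations below are synthetic stand-ins, and this is not Goodfire's or Anthropic's actual method, which goes far beyond a single probe.)

```python
# Minimal linear-probe sketch on made-up activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 256                           # hidden-state dimensionality (toy)
direction = rng.normal(size=d)    # a pretend "sycophancy direction" baked into the fake data

def fake_activations(n, sycophantic):
    base = rng.normal(size=(n, d))
    return base + (1.5 * direction if sycophantic else 0.0)

X = np.vstack([fake_activations(500, True), fake_activations(500, False)])
y = np.array([1] * 500 + [0] * 500)

probe = LogisticRegression(max_iter=1000).fit(X, y)

# A high probability on a new activation would flag the response as likely
# sycophantic, based on internals rather than on the visible output.
new_act = fake_activations(1, True)
print("P(sycophantic) =", probe.predict_proba(new_act)[0, 1])
```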

>> Yeah. And you don't know the business model yet.

Don't need to, as long as we figure it out.

>> There are some ideas we have, but we're not ready to talk about them publicly, and some that are working that it's not right to talk about publicly.

>> Does it feel worthwhile to do this on such small models? Because I think most of the work is done on the open-source releases. How much of a gap is there between what they're able to do there and translating that into doing it at scale?

>> They've shown that even for the biggest open-source models, even for the DeepSeek models, they can do it, and in general scaling is not the bottleneck.

Obviously access to the weights would be a bottleneck, but not...

>> But they're in the Anthology Fund, so they can work with...

>> Anthropic. But they don't have Claude access, Claude weight access.

>> For listeners who want to hear more about mechanistic interpretability, we did a podcast with the mech interp team, Emmanuel from Anthropic, so that's your 101 there. We'll do something with Goodfire at some point. Prime Intellect is another very hypey company. You don't have to say it, but I know it's very much in the water that they raised a very large round.

So I ignored distributed AI for a long time.

It's usually crypto people coming over saying, "Hey, we have these GPUs all over the place. We will somehow ignore the speed of light, and you can use our GPUs to train models." That's why I ignored Prime Intellect. I was wrong.

Tell me why I was wrong.

>> You may not be wrong. I mean, look, I could be the kind of person who shills all of their companies and says this is the best thing ever.

And if you don't think it's going to be a $10 billion company, you're wrong.

Every company has risks at this stage.

And Prime Intellect has their fair share of risks; whatever went through your mind went through my mind when I was looking at that company. I'm sure you've seen this quote too: pessimists are often right, but they rarely change things. It's an easy thing to say, but when you're investing it's something to think about. There are a lot of things that could potentially be wrong with Prime Intellect, for sure, but the thing that really drew me to them is: if they were right about a couple of things, what could go fantastically?

>> Distributed training is one of them. Access to talent, I think, is one of the things that I underwrote for them.

The ability to hire really great people away from places like other labs is really hard, and I think they can do that.

And the third thing is that there's a broader vision to Prime Intellect that is not yet realized, where the first step was distributed compute, and we'll see if they realize it.

Um...

>> Yeah.

>> Well, you know, Will Brown's been on the podcast multiple times, and they've launched kind of like a verifiers SaaS platform or a marketplace.

I'm not really sure what exactly.

I should probably try it out.

But, uh, it's very interesting.

>> I mean, the other thing I'll just say is that everything in AI changes every 3-4 weeks. So I'd be a fool to say I could tell what this company is going to do.

>> Yeah. Well, all I'm trying to do is capture, for people who are not in the loop, that these are the companies people are talking about, right?

Okay. So let's at least hit on OpenRouter, and maybe one more of your choice that's less known but that you want people to know more about. OpenRouter we have to cover.

Big deal, obviously. I do think I was relatively early on this one: I saw the product, I saw what he was trying to do, and it clearly has done really well.

I did not know he was taking investment or I would have invested.

>> He wasn't.

>> Okay. Say more. Say more.

>> OpenRouter was sort of my, you know, I don't want to make this about me. It's really about them. But in my mind it was my darling deal.

>> You're proud of it.

>> Because I'm just like, man, I entered venture and that is the company I want to have built.

>> And I think we're skipping a bit. Let's explain who Alex is, what he did before.

>> Right.

>> So let me give you the background on OpenRouter. Alex is a phenomenal, phenomenal founder.

He started a company called OpenSea before, which was the NFT company.

Obviously, at its peak that was, I think, a $14 billion, more than $10 billion, company.

It did not meet that valuation's expectations.

But look, there are many things in your life that are out of your control.

Then Alex started this company called OpenRouter. And what drew me towards it initially was two things.

One, it was very clear from my time at Glean that this is a perfect problem where engineers all think it's easy until it becomes so annoying to keep maintaining this.

That's the sweet spot, because no other person, no other company will gravitate towards it, yet it is kind of thorny to maintain a portal that accesses a bunch of models.

The nuances are quite tricky and annoying and boring. So that's one thing I liked. The second thing I liked is that I was pretty convinced that if there was a market for anything like this, it would have to be a PLG motion. I'd go so far as to say that in any SaaS market, if there can be a PLG motion, the PLG motion will win. What I mean by that, for people who are not familiar with venture words: PLG means all users have to be able to access, self-serve, and try the product...

>> Without talking to anyone.

>> Without talking to somebody.

Like the classic "get on the phone" on a SaaS website.

So those two things really drew me to the business. And then of course third one is just quality.

There are these small details that OpenRouter gets right: beautiful website, beautiful landing page. It's not some SaaS trash of "here's what we do" and Product, Solutions, About Us.

Like I am so sick of that. You land on the page, it's a developer page.

It's like here's how many people are using what models.

Love it. I'm like, "This guy knows what his users really want.

" And all of those were compelling. I went out to New York to talk to Alex.

He ignored me a bunch of times, forever. I wrote him what I call love letters.

I'm like, "Hey, man. Love it, dude. Like, it's so cool.

I don't even want to invest.

Just talk to me. Like, I don't really care.

I just want to meet you. I have so many ideas and interesting things." And it was one of those companies where I genuinely felt that way. So when I did meet him, we started jamming on things. And I don't know the VC motions of how to sell, so I wasn't really even trying to do that.

But I told him, look, if you are ever going to raise, I will make it happen.

I just love everything about this.

So that's how we ended up doing the round.

I think the company is interesting from a business model perspective. I get this question a lot. How does this business model scale?

And I think right now the business is doing fairly well.

>> They take like 5% of the volume, of everything.

There's that business model, but then there is a reasonable threat vector: what if the spend, on net, goes down over time even as tokens go up?

So you do carry some risk of LLM prices falling to a point where the business stops working, and I know many other companies take that risk as well. So that's one risk of the business, on pure consumer spend.

The second risk would be retention: a lot of hobbyists use OpenRouter and they tend to churn, and a lot of enterprises will use OpenRouter to evaluate and then go pick a model that they want to settle on later. So that's a problem to fix. Those are two of the risks, but overall I think they've just been executing phenomenally.

>> Yeah.

>> How do you think about the Vercel AI Gateway, for example? I'm a fan of OpenRouter, but I also use Vercel.

>> Yeah, I'm interested in the case where you already have, like, I use Next.js, right?

And it's like well I just use AI SDK.

The AI SDK comes with the AI Gateway, so it kind of makes sense to use it. How do you think about this market, and how tied do you need to be to the actual application development versus just being this Switzerland? OpenRouter doesn't have a developer framework, for example.

You know, if we're in a partners meeting, that's maybe what I would ask.

>> My simple answer is that I don't think AI gateways inside other products are ever going to be their first priority. And the other simple answer is that I think OpenRouter has mindshare and momentum that just doesn't go away overnight. So it would be similar to asking, hey, I'm OpenAI in 2020, what if somebody else does this?

Yeah, I mean, they could, or in 2022, they could, but we are so far ahead in some ways already. I think the last thing is that they have built a lot of smaller things that are non-obviously useful, things other people probably won't sweat the details to go out and build. When I say that, I mean everything from, here's something that nobody even cares about with OpenRouter, but they have a feature flag where you can say you only want to go to certain LLMs that do not retain your data. They go to that level of granularity in thinking about what users actually want. That's one example. Another example is their detail at the provider level; almost nobody has provider insights. There was a very interesting side study where Kimi K2 did this whole study of different...

>> The verifiers.

>> The verifiers, okay.

>> But I think that's interesting, because people don't really acknowledge this, but the same open-source model can be served by different providers and have different context windows, different quality, different latency, different throughput.

Where would you go to see all that information?

Well, you see it on OpenRouter. And there are some elements of scale: when there are enough people using the different providers, you get that data.

So all of those things I think are somewhat defensible for OpenRouter, and hopefully more over time.

>> Yeah. And I think their leaderboard charts are one of the best growth hacks, because...

>> Very good graphics.

Especially people that are into open-source AI are always posting these things saying, "Hey, open source is up. We're back."

>> And one thing I used to joke about is that OpenRouter is the non-Elon company that Elon has tweeted the most about, for obvious reasons.

>> Grok Code is at number one right now.

I'm sure that's Grok Code Fast, the free one.

>> There was a good week where every day it was like OpenRouter, OpenRouter, over and over.

I'm like yeah.

>> Yeah. And so, for those who don't know, that's because Grok Code Fast is a top model.

>> Yeah. Because it's free.

There's a lot of gaming of this stuff, right, where it's like, "Oh, we'll give it to you for free, but then we'll say we're very popular." I'm like, "Yeah, you're free because you're popular, right?"

>> Yeah.

The other way around.

>> Okay. Very cool. And okay, so there's a bunch of others.

We're not going to go through all 40.

What comes to mind? What do you want to talk about? What do you think is a very interesting company in your portfolio that more people should know about?

>> Wispr and Inception are the two I want to talk about.

>> Inception. Inception is not even here.

>> That's why I was so...

>> We can talk about the company without saying the name.

>> Yeah. Okay. Let's let's try that.

Let me try that, and then...

>> But I mean, also, if I Google Inception, it's not like I'm finding it.

Anyway...

>> Let's talk about these two things.

So, Wispr I can talk about first.

That's a clear one. So, Wispr is a company that does something that in many people's eyes is very commodity, which is voice dictation on your phone and laptop. The thing that really stood out to us about Wispr is that in that quote-unquote commodity market, they are, in my mind, the fastest, best, and most delightful product, one that in many ways has set the frontier on the nuances of how to make this easy.

Press your function key on your Mac, talk to it.

It's always on. It has fantastic accuracy as you're dictating.

If you ever stutter and go like, "Oh, no.

I didn't mean that, I actually meant this," it knows what you meant and it goes and corrects it. They have this metric they use internally called zero edit rate, which is...

>> The share of dictations you don't need to edit.

>> Correct. And their zero edit rate, I think, is north of 80%, which is insane for a voice dictation product. There are many other risks to that business too, but one thing I love is that users love it, users stay on, the retention is great. And it might make voice suddenly work, because if you think about computing, people type slower than they talk, so it is unlocking this new, faster way that people feel comfortable talking to their computers, which really didn't happen in voice dictation before. And it's not just a Whisper model, which is a common question I get.

>> Yeah, for people who don't know, it's Wispr.

>> Which, you know, you've got to spell it somehow. I mean, the question here is always the same thing, right? Voice is very commodity. I actually happen to use Superwhisper, mostly influenced by Jeremy, actually, and then Granola is very popular.

Notion has this Notion speech thing. What's the plan?

>> This is why I'm not an investor.

How do you survive?

Basically...

>> Trying to reason about why you should be the winner.

>> Even the ChatGPT desktop app has some shortcuts for stuff.

I don't know if it like does exactly the same thing but like you know it's not that far away.

Anyway, you're excited about it.

I do see a lot of tweets about Wispr, and it's one of those things where, yeah, the PLG is getting to me, man. I'm like, should I switch?

I don't know. My thing's fine, but maybe it feels better on the other side. I don't know.

>> Well, we'll see. We'll see how that plays out.

There are some interesting plans to get it to be a cooler product, but we'll see. The other company, and again...

>> Okay, we'll call this Stealth Co.

>> Stealth Co. One thing I find very interesting about Stealth Co. is that it comes within the purview of research.

We talk about different architectures all the time.

One of the most compelling alternate architectures for AI is diffusion models.

One thing that I think is really interesting about it is, you talk a lot, Sean, about the Pareto frontier of latency, cost, and quality.

Diffusion models today are, I would say, 80 to 90% of the quality at one-tenth the cost and latency.

That has huge implications for, obviously, the stock market, which is kind of Nvidia, and many other things. But there are also clear examples of use cases where that might be very valuable, because there are many applications that work in volume that do not require high quality but definitely require better latency, and everyone could use some cheaper models.

So I think there's an interesting area of research there.

Maybe it gets to Frontier, maybe it doesn't.

The one thing I want to draw attention to with diffusion that I think is particularly interesting is that left-to-right reasoning for code doesn't actually make sense. We might sometimes write code left to right, but after you write code you go up and down and figure out, hey, is this variable set, did I do this. There are many bidirectional dependencies in code, so it naturally lends itself to diffusion models, where you can imagine that as you are denoising, you fix partial issues in different parts of the code at once, versus the reasoning paradigm where you have to figure everything out and then give your final answer.
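(A toy illustration of that point, assuming nothing about any real diffusion language model: masked code tokens are filled in parallel, in no particular order, using context on both sides, instead of strictly left to right. The "denoiser" here is a hand-written lookup standing in for a learned model.)

```python
MASK = "<mask>"

# A function with holes whose correct fills depend on surrounding context.
tokens = ["def", "total", "(", "xs", ")", ":", "\n",
          "    s", "=", MASK, "\n",
          "    for", "x", "in", MASK, ":", "\n",
          "        s", "+=", "x", "\n",
          "    return", MASK, "\n"]

def propose(i, toks):
    """Stand-in denoiser: propose a fill using the tokens to the left AND right."""
    prev = toks[i - 1].strip() if i > 0 else ""
    nxt = toks[i + 1].strip() if i + 1 < len(toks) else ""
    if prev == "=":
        return "0"                 # "s = <mask>": initialize the accumulator
    if prev == "in" and nxt == ":":
        return "xs"                # the ":" on the right confirms a loop header
    if prev == "return":
        return "s"                 # "return <mask>": return the accumulator
    return None

# One denoising-style pass fills every mask at once; repeat until none remain.
while MASK in tokens:
    fills = {i: propose(i, tokens) for i, t in enumerate(tokens) if t == MASK}
    if not any(fills.values()):
        break                      # give up rather than loop forever
    tokens = [fills.get(i) or t for i, t in enumerate(tokens)]

print(" ".join(tokens))
```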

>> Yeah. Yeah. I like that a lot.

Especially for syntax structures like C-like languages, where you need to open and close a bracket and hold that state. I think the question is always the sort of quote-unquote hardware lottery of transformers.

Like, transformers is all you need, and diffusion is kind of a different branch off of that research tree. They are related, but we might be too far gone down the transformers tech tree to come back and go down diffusion, to the point where it might never be frontier because we've just had four extra years of transformer LLM research.

>> Yeah, it's true. I think about this all the time: in the course of history, what are the significant moments where, if only something had forked a different way, there would be a completely different paradigm of outcomes?

>> Yeah. And usually the worse tech wins, like Blu-ray versus HD DVD or something like that. There are a lot of variations of this. I think there was the debate about AC versus DC current back in Edison's day, that big fight between Tesla and Edison. I don't know if you...

>> I mean, I'm aware of the very basic details, but it's so interesting, right? Because you take something like this and the question becomes, okay, do we bet on it, or is the timing just off because something else took off, and we can't pull this rocket ship back to Earth, so we've lost that fight.

I don't know. I'm not a purist scientist anymore who believes the best ideas and things win.

In markets it's very obvious that that's not true. A lot of things go into winning, and sometimes it's out of your control.

>> Yeah. It's very true. And, you know, speaking of Anthropic and things that happened this year:

MCP happened this year. When MCP came out I was sleeping on it, and then they came and did the workshop with me, and you started to see a lot more noise, and I was like, okay, there's something to this. Now it has basically de facto won as the interop layer for all the labs and all the models, and there's no reason why this had to win over anything else, apart from the fact that it was well specced out and it was backed by Anthropic. It's kind of a similar thing. I don't know if it was the best, but it was good enough.

>> Yeah, it happens. It happens so often.

It makes it tricky, not just for investing but in general, to think about ideas.

We see this with startups as well.

It's it's very heartbreaking.

Every once in a while you'll meet a founder where I'm like, your idea is fantastic.

Your execution is great.

>> I just don't see it working, because the market dynamics are not in your favor. And maybe I'm wrong about some of them, but you know...

>> When you say market dynamics, is it TAM or something else?

>> No, sometimes it's that you are a small group of people trying to wedge something into a market.

We know how long that takes and we know the other forces at play. Imagine a single person running in a tunnel with a light at the end, and the tunnel is closing in on you.

You could be the fastest runner in the world and you might not make it out of the tunnel.

That's kind of the analogy.

>> And so you might be doing everything right.

It's just that the window is not there, or at least I might not think that window is there. I do think a lot of companies fall into this bucket of ideas.

>> To me, in a way, I almost think of companies like MosaicML, which was like, hey, we've got this amazing team, we can help you fine-tune models. But the market dynamic was that there was really nobody fine-tuning models, partly because the open models are not that good, and partly because people don't really have good data or the expertise. And if you go back now, there are RL environments and RFT, the next wave of that, and maybe they'll be able to get in the window. It's just interesting how, you know, now...

>> But then the flip side of that is, and yet they get acquired for this amazing...

>> But yeah, because the market is just so big. I mean, even if you think about something like diffusion models for text, if you sell it for a billion dollars, that's like 0.01% of Nvidia's market cap. So it's like, okay, well, the amount of money being spent in the space is large enough to justify betting.

>> The same way Instagram was like 1% of Facebook's market cap. This is similar, where it's like, man...

>> If Databricks is rich enough...

>> Exactly.

>> They really want you to know that they're an AI company.

>> Exactly. And now they're worth $100 billion. Without MosaicML maybe they're not on the same trajectory. I don't know, maybe they are, because Ali is great and all.

>> Have you guys ever talked about the rollup companies? That's my favorite little...

>> The PE rollups.

>> Yeah. Yeah.

Well...

>> I didn't know that was a topic of yours.

>> It's not really a topic of mine.

I just find it quite interesting, speaking of AI companies and markups.

There are companies, which I'm obviously not going to name, who go, hey, here's a small company that does a million dollars of ARR completely with humans. I'll buy it for 2 million and then do some of it with AI, but now I'm an AI company, and a million of ARR in AI-company world is a hundred-million-dollar valuation. So it's pure multiple arbitrage on the category that you're in.
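(Back-of-the-envelope version of that arbitrage, using only the illustrative numbers from the conversation rather than any real deal:)

```python
arr = 1_000_000                      # the acquired company's ARR, delivered entirely by humans
purchase_price = 2 * arr             # bought for $2 million
ai_relabel_valuation = 100_000_000   # "$1M of ARR in AI-company world is a $100M valuation"

markup = ai_relabel_valuation / purchase_price
print(f"${purchase_price:,} in -> ${ai_relabel_valuation:,} on paper ({markup:.0f}x)")
# $2,000,000 in -> $100,000,000 on paper (50x)
```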

>> So...

>> But, like, yes, that's the cynical ha-ha, but then what if it actually works?

>> Because like the hard part is getting the customers.

The hard part is getting the domain expertise. You drop a bunch of software engineers in there, automate it, make it scalable, make it cheaper, and yeah, maybe it works.

>> No, you're right. You're right.

>> I think someone just funded a company that bought a tax firm.

So...

>> Yeah, a law firm, an accounting firm...

>> A law firm. Law firm. Yeah.

>> If it works, it works. What's interesting to me is that you can 50x the value of the company before you've actually landed anything with AI yet.

>> Yes. But then you use that funding and the equity to hire the people, and it's weird. So there's this concept I always talk about which I'm surprised people don't really understand.

It's reflexivity.

The belief that something can be true can make it true even though it's not true at the time that you believed it.

>> Yeah. That's venture capital.

>> Yeah.

>> You just give money and everybody's like, "Oh, they raised 300 million. It's a great company. I love that company." And it's like, "Yeah, I'm an investor in it, so I love it, too." And all the employees are like, "I love this company. My stock is worth a lot of money."

There's also an effect that's very clear in venture capital where, beyond what you said, which I agree also happens, there are times when people funnel so much money into a company before it's really prime time that it dissuades anybody else from entering that market, and then they become the de facto winner because they cancel out the competition with funding.

And I'm not going to name the categories, but you can think of innumerable categories in this market where that's already happened in this paradigm.

>> Yeah. And I feel like even in AI, maybe two and a half years ago when ChatGPT came out, it was like, this is cool, but a lot of enterprises were maybe skeptical: is this trend going to continue? But once you start seeing tens of billions of dollars being put into OpenAI and Anthropic, it's like, it's got to work.

>> Especially when you can deploy it in hardware.

>> Which, at that point, means you're building infrastructure, and infrastructure is very capital intensive, and you actually can do the math. It's not humans anymore; it's machines and land.

>> Yeah. Exactly.

>> Power. Like, Amazon is building all these training chips and all this infrastructure for Anthropic.

It's like do you really think they're dumb?

At some point it's the same with Stargate.

It's like, do you think all these people are dumb and the models are not that good? On the podcast we released today with Kyle, he was still kind of skeptical that they had the 500 billion for Stargate.

And I'm like, not only do they have the 500 billion, they have the next trillion lined up, mostly because of the projections. I've been talking about this a lot and I'm very out of my depth because I'm not Dylan Patel, but I think it's probably the biggest story of the year, beyond the models: the infra buildout. I think people don't understand that the roadmap is very, very strong for at least the rest of this decade. OpenAI has to go from something like 2 gigawatts of compute this year to 30 with everything they've already announced, and then there's a plan for the next 125. The United States uses 300. It's crazy ambitious.

Do you think, and I guess this is a question for you guys too, because I don't have a good answer yet: the belief is always, obviously, bitter lesson, build, right? You buy more compute, therefore you get better models.

>> And, by the way, is this an Anthropic-relevant thing too?

>> Right.

>> So is that necessarily true? There could also be a world where that's just not true, so you are kind of...

>> This is what makes it bitter. It's like, what if it doesn't apply to me this time? Right.

Right. And I think, being in Sam Altman's place, that's absolutely the right chess move to play.

But I do wonder what happens if all this investment in compute doesn't actually lead to economic gain / better models / everything else.

>> But I feel like we've reached a point where like the models are good enough that even if the next generation is not 10x better, we'll be able to use the compute.

I mean, and again, a data center is, you know...

>> That's the cope. They're writing it down over something like 30 years. So it's like, can you run GPT-5 Pro over the next 10-15 years?

>> Given the amount they're spending on compute, and this is a general question, I'm not criticizing at all: even if everyone was using Claude or Codex or Claude Code all the time, inference demand is not that big.

>> Globally.

>> Right. So you would have to believe...

>> So what would you have to believe for that to be true? Because there are 800 million weekly active users.

>> This is what Greg Brockman says: a GPU for every human on Earth.

I'm somewhat shitposting, but they actually say this in their official comms, so I'm just repeating him.

>> I don't necessarily disagree.

I'm just trying to work backwards to what we'd need to believe to get there, because ChatGPT compute is not that much.

>> Correct.

>> Right. So they're not doing, like, agentic stuff.

Maybe they will be in the future.

Most people are doing basic Q&A type queries.

>> By the way, I put it up in the chat.

So if people are watching on YouTube, they can see this, which is: this year OpenAI spent $7 billion on compute, and only two of that was for all of their inference.

>> Right.

>> The remaining five was R&D.

>> So all of ChatGPT, all 800 million users, all of Sora, all of the API volume: two billion. And they have two and a half times that for R&D.

>> Right.

And so my point being: yeah, inference is one thing, and I don't know how that will scale to that volume, but then you'd have to believe that the rest of it goes into R&D and therefore produces models that are so much better that they drive more demand, etc. But if, in any case, the incremental margin is not that big, then that's the risk of the...

>> Yeah, so to disrupt OpenAI you need to have more efficient research, because right now it's pretty inefficient: you spend five to get two.

>> So what OpenAI did to Google is what the next OpenAI has to do to OpenAI.

It's like that classic quote, your margin is my opportunity like Google was milking those margins and they didn't want to spend the the compute for every search query and so >> now open is willing to >> so we've covered a lot of topics I think this thanks for indulging like I think this is like for me it's like a survey episode of like here's everything we're also catching up with the former guest it's always nice maybe we can end it on this like coding interview thing uh which which literally you tweeted about today what is the situation that you I guess engineers should be aware of.

And I think this like maybe ties into LLM psychosis a little bit, >> you know, like like so I tweeted I'll just cover the tweet first. So I tweeted about this u guy who wrote a blog post about he was in an interview from a I I didn't think it was a legit account.

He thought it was a legit LinkedIn message where he was interviewing for the company.

They sent him a coding interview.

They said clone this repo, run this code, make this edit.

Kind of not untraditional. So it's pretty pretty run-of-the-mill type interview.

It happens. And in that interview, he claims that he went to cursor and asked whether the code had anything any vulnerabilities or anything he should be aware of.

And it revealed that it had some link.

It had a bite array that compiled into a link that would go and take a bunch of private information from you.

So that was the TLDDR and uh and I tweeted about that saying you know like the the world interesting enough it was solved by vibe coding but it could very easily the world of vibe coders who don't really look at code I imagine are more susceptible to being in attacks like this and in the future and uh and it got me thinking about a lot of things like what is what do attack vectors even look like if people aren't looking at code there's so much that can go wrong and what are the implications on model safety and how models behave in those environments.

So, that's one, but I think the broader thing and and I'm curious what you guys think about this is what I've been noticing more and more is I was having this conversation yesterday with some of my close friends where, you know, some of the joy of coding used to really be you're stuck on this annoyingly hard problem and you just bang your head against a wall and you want to kill yourself and then eventually you're like, I've figured it out and then you solve it.

And that's that's the muscle that that you build when you improve and get better.

And now I find myself even doing this.

It's so hard to do if you just have a constant slot machine that might give you the right answer.

And who knows if it will, who knows if it doesn't, but you just pull it all day long. Please fix, please fix, please fix. And uh and what does that mean for the craft of engineering or software engineering in the future?

I I don't know. Like this vibe coding stuff.

I mean, great for the rest of the world that was not an engineer, but I'm now seeing how it's affecting the the trained software engineers and it's kind of like a drug for them and it stops them from like living their own life >> which is doing the engineering because >> it turns your brain off.

>> Because turns your brain off.

>> Yeah. I think you know self-driving cars, people thought about this first.

This is why when you drive your Tesla, you have to like keep your eyes on the road.

uh because they don't want you to turn your brain off and we don't have that equivalent in in uh developer environments yet.

Maybe we should like watch your watch your eyes.

>> We removed one one word in the code.

Which one was it? Write it back.

>> So my answer I mean I happen to have shipped a model today or two models.

Um and uh part of that is actually what I've been calling the semi- async value death.

Um and a lot of it I think as is my reflection on coding agents in terms of like uh we started with co-pilot which is tab autocomplete and then when we went all the way to clock code which is like very async very you know like uh just it could take 30 minutes could take 30 hours I don't know it just just it it just runs and I think like something that cognition is very interested about is fast agents or something I've been writing about more is fast agents is where like under a certain level you actually want to just be in a mind meld with the human and AI uh to have like fast responses so that you can get helpful assistance if it helps you can get out of the way if it doesn't help and um it like that is actually where you do your hardest problems and then the async agent is where you do the commoditized dumb boring labor stuff that you know how to do you just don't need to do it but when you are actually very deep work and focused and you're working on a hard problem you are you're like you should be applying your human intelligence augmented by AI in an un unintrusive fashion which I think is the way that obviously I think it's like it's a prohuman message but it's also like a really interesting area of research for us >> but that's almost like to play devil's advocate there that's like telling somebody well I'm going to put the cigarettes right here I know you love smoking but please don't do it >> it's not a cigarette >> it's right here >> it kind of is in there there's an analogy right to be made here it's it it's a cigarette for your brain because you do not think anymore when you pull that button and and over time I feel like you know the brain will get weaker if you don't use it for that task.

And I like your message.

Ideally, if I had a team of engineers, I would tell them the same thing. But I worry about the reality, which is that's not what they do...

>> In many cases.

>> But I mean, you've got to ship the thing, right?

I agree, but at some point you've got to close the ticket and merge a PR.

>> Mhm.

>> So how are you going to get that code done, right?

Either they're doing it, or they're going to get fired, one way or the other.

>> Yeah, it's interesting. Okay, so maybe I'll put it this way, and I want to see how you respond.

Okay, so we have the fundamental formula for coding agent performance.

It basically is: find the right files and then write the right files.

That's it. Read and write: read the right files and write the right files. That's it, right?

So actually what fast agents can do, and what I just shipped today, is basically the equivalent of a heads-up display: it gives you more info, but you still take all the actions. We help you read faster, read more efficiently, read with more focus, but you still write.

>> And so I think that's not a cigarette so much as: we try to be helpful, and we're evaluated on the helpfulness of the reading and the comprehension, so that you can hold everything in your head.

>> That would be the pitch.
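(A toy sketch of the "read the right files" half of that formula: rank repository files against a task description and surface the top few as a heads-up display, leaving all of the writing to the human. Purely illustrative; the scoring here is a naive word-overlap heuristic, not how any actual product works.)

```python
import math
import re
from collections import Counter
from pathlib import Path

def words(text: str):
    return re.findall(r"[a-zA-Z_]{3,}", text.lower())

def rank_files(repo: str, task: str, top_k: int = 5):
    task_counts = Counter(words(task))
    scored = []
    for path in Path(repo).rglob("*.py"):
        counts = Counter(words(path.read_text(errors="ignore")))
        overlap = sum(min(counts[w], c) for w, c in task_counts.items())
        # dampen by file size so huge files don't dominate
        scored.append((overlap / math.sqrt(1 + sum(counts.values())), path))
    return [p for score, p in sorted(scored, reverse=True)[:top_k] if score > 0]

if __name__ == "__main__":
    for path in rank_files(".", "fix the retry logic in the http client timeout handling"):
        print("read this first:", path)
```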

>> It's true. I don't know how the product looks.

I would love to eventually play with it, with SWE-grep and all of that stuff. But there's a world where I think the product decisions also go a long way toward how people use it.

So if it is like that, then maybe. And I think when people use, for example, Cursor, a lot of people like the fact that they can see the code and then they have to hit the final accept. Yeah.

So...

>> Human in the loop.

>> Human in the loop. But I still worry. And I worry the most about the younger kids, right?

Think about the people growing up in college.

>> How would you ever get yourself to think if you just had this clearly more intelligent thing than you? I don't want to rate myself too highly, but if I'm working in a domain that I understand, I can at least tell, yeah, model, you're doing the wrong stuff.

Definitely don't do that. Don't write that at all. That's a terrible file. Why are you creating four files for this? But if you think about what it looks like to an 18-year-old CS freshman, they're probably just like, I guess that's how you do things. And they can't hold it to account like that. So their training is just a little bit different.

>> Cool.

>> Yeah.

>> Deedy, thanks for indulging, welcome back, and thanks for coming back.

>> Thank you guys. Always fun chatting with you guys.
