Benedict Evans: OpenAI’s Moat Problem & the Future of Software
By The MAD Podcast with Matt Turck
Summary
Key takeaways
- **No Winner-Takes-All in Foundation Models**: There is no winner-takes-all effect or network effect in a foundation model, unlike software like Windows, Google, or iOS, which produce monopolies through network effects. Others can always make a model as good as yours, with 3-6 organizations leapfrogging each other. [01:51], [02:04]
- **OpenAI's Mile-Wide Usage**: OpenAI has 900 million weekly active users, but most are not using it every day and can't think of anything to do with it, with only 5% paying. The usage is a mile wide but an inch deep. [07:46], [07:49]
- **Better Models Don't Solve the Problem**: A better model doesn't solve the problem because improvements are marginal; if it got 10 out of 50 things wrong last month and now gets 8 wrong, you still check all 50. It's either right or not right. [10:23], [10:31]
- **Memory Fails as Defensibility**: Memory only works if you're using it a lot already, but 80% of people hit return fewer than 1,000 times last year, averaging less than three times a day. You can just ask it to dump everything and paste it into another model. [11:45], [12:08]
- **AI Enables Way More Software**: AI coding makes coding way cheaper and easier and enables things that couldn't be done in software before, so there will be way more software, expanding improvised software use cases beyond Excel or Sheets. [27:47], [27:58]
- **Swap Mindshare for Assets**: OpenAI has commodity technology, no infrastructure or differentiation, but massive mindshare, so they try to swap it for hard assets and talk their way into a self-fulfilling prophecy against competitors with legacy cash flows. [00:01], [39:13]
Topics Covered
- No Winner-Takes-All in Foundation Models
- OpenAI's Mindshare Swapped for Assets
- Better Models Don't Fix Jagged Capabilities
- AI Expands Improvised Software Category
- AI Bubble Leverage Ends Abruptly
Full Transcript
If you're Sam Altman, you've got a commodity technology. You're competing with people who have giant legacy cash flows. You don't have your own infrastructure, you don't really have any differentiation, but you've got massive mindshare. So, what do you do?
You try and swap that for hard assets, and you try and talk your way into a self-fulfilling prophecy. And then you've got this middle case where you could almost call it improvised software. And now AI coding means coding is way cheaper and easier, and it also means that there's a whole bunch of stuff that you couldn't do in software that now you can. And so there will be way more software.
>> Hi, I'm Matt from FirstMark. Welcome to the MAD Podcast. Today my guest is Benedict Evans. Ben is one of the most thoughtful and influential tech analysts in the world. He's also one of my favorite guests, making his third appearance on the show today. In this conversation, we cover the great AI unbundling, whether software is really dead, why OpenAI might be the next Netscape, and what it all means for AI builders, investors, and executives. Please enjoy this fantastic conversation with Ben Evans.
Ben, welcome, or I should say, welcome back. Thanks for being back on the MAD Podcast.
>> Thank you.
>> So I thought a fun place to start would be what's going on with OpenAI. As we're recording this, just last night there was a Wall Street Journal article that said the company is planning a major refocus on coding and business users, with their CEO of Applications, Fiji Simo, reportedly saying that the company should stop side quests. You wrote recently about OpenAI. How did we get here, and what are the fundamental challenges that OpenAI faces?
>> I suppose there's two ways we could talk about this. One of them is to talk about all the stuff that OpenAI has been trying to do. I think the other is to talk about the problem. And the problem is that, as far as we can see, there is no winner-takes-all effect or network effect in a foundation model. So there's nothing you can do that means other people can't make a model as good as yours, which is what we're used to from the software industry. That's how Windows and Google and Facebook and Instagram and TikTok and iOS worked. What we're kind of used to in tech is that software by definition has no capital, or very little capital, but it has network effects, and so it tends to produce monopolies or near-monopolies, and that produces high margins. That's what happened with Windows, and with iOS (not macOS), and with Google, and so on. Well, the question with LLMs is that they're very expensive and very hard, and so you can fail to get onto the ladder, or fall off the ladder, like Microsoft on the one hand and Meta, for the time being, on the other hand. But there's no lever you can decide to pull whereby you'll just pull ahead of everybody else and they won't be able to catch up, which is like Google versus Bing. It doesn't matter how much money and how hard Microsoft works, Bing will never catch up with Google. So that means we've got, pick a number, between three and six or maybe more organizations that can make a frontier model, and they keep leapfrogging each other every couple of weeks or every couple of months. So that's one problem.
>> Just to build on that problem, you mentioned software. Is it such a bad thing? If you think of not Windows, but the oligopoly of AWS, Azure, and GCP, those are largely undifferentiated businesses that, because of the market size, seem to be doing quite well.
>> First of all, I'd push back slightly on that comparison and then answer the question. The first point is, if you actually look at the market shares, Google Cloud, Azure, and AWS are actually in quite different businesses. AWS is mostly infrastructure. Microsoft is mostly services. Google is mostly scrambling to catch up in a very distant third place, despite having basically invented cloud. So they're kind of not all the same, I think. But the challenge, I suppose, is where do we get to equilibrium on the foundation model itself? Is it that these things will be able to create ecosystems above themselves? So it will be like iOS and Android, and they will create value capture; they'll build tools and all sorts of stuff on top of the raw foundation model, and so you'll have to choose: right, our company is going to standardize on ChatGPT. Which isn't what happened in cloud. As a business, if you go and buy a SaaS app, you don't think, well, we're going to log into it with our AWS account. Certainly as a consumer, I can't remember which cloud Snap uses, and I don't care. And if I install Uber, which cloud does Uber use? Who cares? So it may be that the models are able to build something that looks more like Windows, which is what Sam Altman talked about a lot at the end of last year. Or it may be that they're basically commodity infrastructure, sold at marginal cost, and maybe you barely make a return on the investment, or maybe you don't make a return on the investment itself, and what you're doing is making a return on everything you build on top of it. And that's where these companies do get different, because if you are Meta or Google, you've got this whole other highly profitable business which now needs to have LLMs inside it, powering all sorts of capabilities and features. And you probably want them to be your LLMs rather than somebody else's. But you may not necessarily need to make money from the LLM by itself. I mean, obviously Meta has a cloud, but Meta doesn't sell the cloud or even try; they just use it to power their own stuff. So those are the two possible outcomes. It may be that this stuff ends up as commodity infrastructure that's sold at zero margin. It may be that, no, there's two or three of these and they capture a lot of value. There's a sort of price equilibrium question: do you get pricing discipline? Do you have a small number of companies that hold the prices high, or not? As I wrote about last month, ask your favorite economist; there are terms for these kinds of questions, named after the economists who first asked them generically. But going back to Fiji and OpenAI, the problem at the moment is that the raw chatbot by itself isn't a great product, and most people struggle to work out what to do with it. You've got a small number of people, mostly in tech but also outside tech, who are very self-optimizing and have certain kinds of jobs that map very well to the kinds of stuff that LLMs are very good at doing. If you look at the usage data, something like 10% of the population is using these things every day, but another 50% are using it every week or every month. So most people who have a ChatGPT account can't think of anything to do with it today. And so you can ask, well, is that because the models have to get better? And we had this conversation before. Is it because the models have to get better? Or is it because habits have to change? Or, at the extreme, could you say, well, people only used Google once a month or once a week too? Which doesn't seem like a good answer. Or do you have to build a whole bunch of stuff on top, which is back to this commodity infrastructure point? And so a lot of the question now is: are we all just going to use ChatGPT, or Anthropic's Claude, as the chatbot, or is that going to be an invisible API call buried underneath
that's used by some other thing, and it's the other thing that makes all the money? The really interesting comparison here would be TSMC, because what happened in chips was that there's no network effect, but with each generation it got more difficult and more expensive, and the numbers dropped with each generation. And now there's basically one company at the frontier, two or three that are a generation behind, and then five or ten that are a generation behind that. And TSMC makes lots of money, but they don't capture all the value of the tech industry. As a consumer, you don't know that TSMC makes the chip in your iPhone, and they don't get a cut of every App Store sale. So that's sort of the puzzle. The great strategic problem for OpenAI is you've basically got commodity technology. You don't have a network effect or winner-takes-all effect. You've got 900 million weekly active users, but most of them are not using it every day and can't think of anything to do with it. And only 5% of them are paying for it. So the usage is a mile wide but an inch deep. And so you've got to swap the mindshare and the momentum that you have for something more durable. So, there was a story yesterday that they're trying to do deals with private equity firms to get into the private equity firms' businesses. They did this video-sharing app at the end of last year that didn't work. And there's an app store. Oh no, there's another app store. Are we on the second or the third app store now? I've kind of lost count.
And there's an e-commerce integration, but guess what? E-commerce integrations are really, really hard and complicated, because there are millions of merchants and billions of SKUs, and Google and Meta have failed at that twice. So it turns out that OpenAI went, eh, no, we probably can't build that ourselves right now. So there's this sort of challenge of how you get from having the mindshare and one of the good models to having some kind of durable platform business where you've got developers, users, corporate accounts, something that's locked in, that doesn't just depend on you having the best model next week, and the week after that, and the week after that.
>> Yeah. Last year, when you and I chatted, I think we were having this conversation in those terms, that people were not sure what to do. We went into whether ChatGPT needed a new GUI. What's fascinating is that it's been an extraordinary year in model development, with reasoning and RL and all the rest.
>> And the net is that a better model doesn't solve the problem. This is something I wrote about a year ago, I think, where I said, what do you mean when you say "a better model"? Which sounds like a crazy thing to say, but I always remember, years ago, when iPhones were the hot new thing, one of the New York late-night TV shows would go onto the street and show people last year's phone and say, "It's the new phone, what do you think?" And they'd go, "Wow, it's amazing," because you couldn't really tell, actually.
>> And if you're doing very specific, very hardcore stuff and you're pushing these models to the limit, then you'll be able to say, oh, it couldn't do that, and now it can. With most stuff, the models are jagged in lots of different ways. You don't really know whether it will be able to do something or not, or how well it will be able to do it. And most of the stuff that you try, it can sort of do, to varying degrees. And now it can sort of do it slightly better, or maybe slightly less sort of. But if you've got a bunch of use cases where you need the right answer, as opposed to sort of the right answer, then saying that the model is better doesn't mean anything. It's literally meaningless, because what you're telling me is: I asked the model to compile 50 things, and last month it would get 10 of them wrong, so I'd have to check all 50. And now it will get eight of them wrong, or 12, but probably eight. So again, I'll have to check all 50. So actually nothing's changed. The point at which things change is not when it goes from 90% right to 91% right to 95% right. It's either right or not right.
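Evans's checking argument can be made concrete in a few lines. This is just an illustrative sketch using the hypothetical 50-item numbers from his example; the function name is mine:

```python
# If errors are unpredictable, the human must re-check every item,
# so verification cost stays flat until the error rate is actually zero.

def verification_cost(n_items: int, n_errors: int) -> int:
    """Number of items a human must re-check.

    With even one unpredictable error, no individual item can be
    trusted, so all of them have to be checked.
    """
    return n_items if n_errors > 0 else 0

# Last month: 10 of 50 wrong -> check all 50.
# This month:  8 of 50 wrong -> still check all 50.
assert verification_cost(50, 10) == 50
assert verification_cost(50, 8) == 50
# Only a model that is actually right changes the workflow.
assert verification_cost(50, 0) == 0
```

The point of the sketch is that the cost curve is a step function, not a slope: "better" only matters to this workflow at the moment errors hit zero.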
And if you don't have that, then you've got to sit and think much harder about what you do with this thing, how you use it, and how you build tooling and software around it. Part of the challenge here, I think, which we could segue into when we talk about the sort of SaaS apocalypse, is working out how to ask for what you want. And so, for the sake of argument, even if the model had general AI and was actually really superhuman, not superhuman the way that branding people talk about it, but actually really AGI in all senses, it's still quite hard for you to describe what you want.
>> At some point there was a hope that memory would add some level of defensibility and moat to all those chatbot interfaces, including very much ChatGPT. That doesn't seem to have happened. Why is that?
>> Well, so, two or three things. Firstly, I think, very obviously, it only works if you're using it a lot already. And so if you haven't worked out a bunch of stuff to do with this, and most people have not, then you don't see the memory. OpenAI did this bit of marketing at the end of last year where they gave everyone a cute little graphic of how many messages they'd done and told everyone what decile they were in. And so I went on Reddit and grabbed like 300 screenshots of people posting these. And it turns out that if you did 1,000 prompts last year, you're in the top 20%. So basically 80% of people hit return fewer than a thousand times last year: an average of less than three times a day. Obviously usage was growing during the year, so you can't really average it across the year. But if you typed return a thousand times in the whole year, you're not really using this every day. And so there's no memory there. Then a subsidiary point would be: okay, is that a network effect, or is that stickiness? It's probably more stickiness. And how sticky? What happens if you just ask ChatGPT to tell you everything it knows about you, and then paste that into Claude, or vice versa? So it was kind of unclear whether that was a thing. I mean, the whole thing does feel very 1997 in a lot of ways, in that it's clear this is a huge deal and it's already starting to work, but that doesn't mean you know how anything's going to turn out. And one thing that occurred to me looking at ChatGPT as a product is that the chatbot itself is kind of like trying to differentiate a web browser, in that you've got an input box and an output box. And how can you make them different, if the whole point is that you can type in anything and get anything out? You can make the underlying rendering engine better, like Chrome has a better rendering engine, fine, and you can make the underlying LLM better, but the actual product that's presented to the user is an input box and an output box and a couple of buttons around the edges. I mean, it's the same with logos. You've seen the joke that all the chatbot logos look like buttholes, all these little spots. But again, what are you supposed to show? Is it even possible, in principle, to make the UI of a chatbot different? Because isn't the whole point that it's completely universal and there's no UI?
>> So, back to Fiji: OpenAI sort of has this problem that, first of all, the model itself is more or less the same as everybody else's. And if you're the kind of person that watches this podcast, you're probably thinking, no, no, no, it's really different. But for somebody who's only using this once a week, they really don't see the differences. And then it's not clear what you're supposed to build on top of it, and whether that's going to be you, and even if you worked it out, why wouldn't everybody else build it too?
>> Yeah, you had a really interesting point somewhere about how building product in a foundation model company was fundamentally different from any other company.
>> Oh yeah. Well, Kevin Weil and Mike Krieger both made this point on stage like 18 months ago or something. And then Fiji Simo made it in like the second hour of a podcast I was listening to at the gym. You know the joke that the most secure encryption on earth is anything you say in the second hour of your podcast, because no one will ever hear it? Well, I actually listened all the way through. And she said, which is also what Mike and Kevin had said: the way it works is you look at your phone in the morning, and you've got an email from the research group that says, hey, guess what, we've got this cool thing. And then your job is to go and do something with it. So it's like, you get the email: we've got a new voice model. It's like, okay, well, I guess we're adding a microphone button today.
>> Like, that's not... you start from the technology.
>> You don't control the product strategy, which is of course how science works: you don't know what's going to happen, and you don't know what's going to get built. Obviously you've got Sam and Dario and so on setting that fundamental research strategy, but the thing comes back in six months, and it may work, it may not. You don't know what's going to work. You don't know if it will happen. And, fairly or unfairly, I opened the last essay I wrote by taking that quote from Simo and comparing it with Steve Jobs famously saying you can't start with the technology and work forward to the product; you've got to start with the user experience and work backwards to the technology. And it's not like anybody doesn't know this, but that's sort of inherent in where we are: this technology is continually shifting and changing and evolving, and you don't know what it'll be able to do next week or next month. So you kind of don't know what it is that you're trying to build.
>> Mhm.
>> You're a strategy taker, not a strategy setter.
>> And in terms of what one can build on top, and how those foundation model companies can encourage people to build stuff on top: what do you make of agents? I guess a 2025 word, but one that's even more important in 2026. So OpenAI created a bunch of products, AgentKit and other frameworks, to help people build agents on top of the models. Is that a first step towards a world where they do have defensibility, because people will want to build in the same universe?
>> Well, so you'd eventually get to a point where people have actually built a different product, as opposed to an input box and an output box to a model. It's not at all clear to me that building your own agents is a consumer-facing thing. It seems to me either you shouldn't have to know what it's doing, or that should be submerged inside some product. Now, I think Stripe copied autonomous cars and did like level zero to level five for agents. And so, level five is that the agent knows you're out of dog food and buys you more dog food, and you don't even know; it just kind of appears: fully autonomous ordering, fully autonomous purchasing, fully autonomously solving the problem. Whereas sort of level naught is, here's a picture of a code, work out what it is, and your model goes and calls a bunch of agents to solve that question for you. You don't have to know that that's what it's doing. And so there's certainly a spectrum of what you mean by agent. Do you just mean that the LLM can call different tools, but you don't know about it? Or do you mean you can actually get the LLM to go and do a thing for you, with or without multiple tools? It's a slightly fuzzy term. It's a little bit like saying metaverse: you don't really know what somebody meant. They might have meant VR, and VR is real. They might have meant games, and games are real. But when they said metaverse, you didn't really know what they were talking about. It's kind of the same with agents. And it gets again to this... it almost deepens the problem, in that the more the capabilities expand, the more jagged the thing is. And so the harder it is to know whether it will be able to do X or Y, the harder it is for you to mentally map whether this is the kind of thing that it would or would not be able to do. So, for example, these models struggle massively to read PDFs. But that's not a thing you would know from looking at it, that it will be completely unable to read a PDF, because there's not some intuitive reason why you could deduce, well, of course it won't be able to do that. And some of this is just familiarity. It took a while to work out that you can't ask Google that; it's not going to be able to answer this. But it is this sort of puzzle of how you build product around representing that. At what point does the frontier get smooth enough, or your mental model map closely enough to the jaggedness of the frontier, that you can understand what it could or couldn't do? And at what point do you actually have to sit and map that to a particular use case? But there's another way I think about this, which is to say that with every new thing, you start by getting it to do the stuff you're already used to.
And it kind of takes time to build new things that are native to the new technology. And you can probably split that apart further, just because obviously every slide needs to have three bullet points, and say there's a third step, which is: at what point can you pull the whole thing inside out and do something that doesn't bear any connection to the way you might have thought about the old thing? The obvious example would be the progression from Flickr having a mobile app, to Instagram, to Snap or TikTok. Because Instagram is taking the desktop experience; indeed, arguably Instagram is still kind of taking the desktop experience, except that it adds filters, which no one uses anymore. But Snap says, this thing has a camera, so why aren't we starting with the camera? And then TikTok says, and it's a social network, so why aren't we starting with that? And so you're sort of pulling the whole thing inside out. And TikTok isn't like a mobile version of Flickr, or indeed a mobile version of YouTube, anymore. It's something else. And we're still at the stage of taking a PDF of your catalog and putting it on your company website.
>> Mhm.
>> Um, as we try and work out what we should do with AI. And of course, that was a really big deal, and it's still a really big deal for lots of companies, where having a PDF of your catalog on the website is really good, like 30 years later.
>> Yeah. But the jaggedness of what these things can do is almost mirrored by the jaggedness of where this is important in different industries and for different kinds of jobs: which jobs get affected more or less, which industries get affected more or less. Is there an argument to say that actually OpenAI would be very well placed to figure out what to build on top of the models, because it has the whole history of queries and prompts and dialogue, you know, pushbacks from the user? And if everybody asks about travel, and travel in a certain way, then OpenAI...
>> The problem is that that's self-selected behavior; that's what people can think of. Whereas, and again it's a Steve Jobs quote, what you want is to say, "Aha, I've realized a thing that you could do with this that hasn't occurred to anybody." Because that's where you create billion-dollar companies, trillion-dollar companies. You change the question, and you realize: maybe I'm not going to do it like that. I've realized I can solve that. Here's this problem; you didn't really realize that problem existed, and I'm going to go and work out a way of solving it that doesn't even look like the problem either. And then that's how you really change things. Yes, as step one, OpenAI can look at all the desire paths, and they can pay for the desire paths. The problem there, of course, is that they also have to map that against trying to make money. So I think when you look at the usage data they released last autumn, very little of it was actually e-commerce. There are loads more people using it for porn, but you can't make money from that.
>> You can make money from e-commerce. So they try to do e-commerce and advertising. Also, they hire a bunch of e-commerce and advertising people from Meta. So that's what they know how to do. I mean, this comes back to my point: is this commodity infrastructure? And there's also a quote from Craig Federighi at Apple. People were having a go at Apple: you haven't got any LLMs. And Craig says, well, we don't have Uber or YouTube either. He didn't actually put it quite like that, but: we don't have a video-sharing site, we don't have a taxi service; we provide the platform for other people to do that. And as a platform, you can't invent all of those things. Apple and Google could not invent everything that was done on the iPhone and Android, and Microsoft couldn't invent everything that was done on the PC. The trap in that account, of course, is that you can also say that about Microsoft on the web. Microsoft didn't invent any of the stuff that we did on the web. I mean, that's not quite true: they invented Expedia, and they tried to do all sorts of stuff. But in the end, all the interesting stuff on the web, you still did it on a Windows PC, but it was done with other people's tools, other people's software, other people's capabilities, and the PC in effect was commodity infrastructure that you used to access the web. It became a Chromebook, which is actually what a Mac is now, mostly, for most people. So you've got this tension. On the one hand, can you possibly create all of the millions of use cases that people are going to come up with for this stuff? No. But you want to be the place where they build it, build it for your thing, and not be commodity infrastructure for everyone else. You want to be somewhere in between those two. The puzzle is, and this was my point, you don't sign into your SaaS app with your corporate AWS account. You sign into it with Okta or something. And no consumer has an AWS account.
>> No no company has an AWS. You know
that's not that's not the right level of abstraction or aggregation. um it's just infrastructure and it does feel at the moment like the LLM is infrastructure and you know another historical analogy
is that I'm just old enough to remember the early '80s with PCs, when people said that what you should do is buy a PC and then buy a database program, or buy a software development tool, and make your own software. So you're a retailer and you want some inventory software: you should buy a PC and buy C++ or Turbo Pascal, or whatever the programming tools would have been, I don't know, and program your own inventory management software; or you should buy a database program and build your inventory management in that. And neither of those was the right way of thinking about this. In fact, I found this wonderful quote in the New York Times from 1980 talking about VisiCalc and spreadsheets, where a guy, and of course it was a guy, says, well, I used to code DCF models in machine code and it would take me 20 hours, and now with VisiCalc I can do it in 15 minutes. So tell that to a software developer next time you hear one saying AI is a completely different thing and nobody has ever abstracted software like this before. Yeah, we've been doing this for 30, 50 years. So the question is: where do you create the applications and the use cases? Who does it? How close does that sit to the raw chatbot? How far up the stack does the chatbot go? So, you know, there were ChatGPT apps, which failed two or three times, and of course there are Claude Skills as well, or whatever OpenClaw calls them.
>> Yeah. Are you generally more bullish on Anthropic?
>> Not really. I mean, this week they've got all the fire, they've got all the juice, whatever the word is, I don't know. This week it's this; next week it'll be something else. OpenClaw is kind of interesting to me because it looks a lot like desktop Linux, in that suddenly, where you'd been having to use this monolithic thing from these big companies and couldn't really get your hands on the metal, you just had to get your API key and type stuff in and they told you what you could do, now you can build it yourself with your own hands. It's like the Homebrew Computer Club in the '70s: you can make it yourself. And then the other side of this, of course, is "it just needs to be a little bit more polished and then it'll be ready for everybody," which of course Linux people have been saying for 30 years.
>> And, like, they don't understand that the whole culture of it is just wrong for that.
>> And the other interesting thing about OpenClaw is that it shows you why Google and Apple haven't shipped this. "Tidy up my inbox." "Okay, I deleted all your messages. Great."
>> "You're welcome."
>> Yeah. So there's this huge craze around OpenClaw in China, which, though I obviously don't speak Chinese, is interesting in its own right. It gets at this sense of how much pent-up enthusiasm there is and how many possibilities there are, but also, paradoxically, how hard it is actually to make a real thing out of this. This is kind of the funny thing about AI assistants: this is really a great use case, except it's much, much, much harder to actually do it and be sure that it won't delete your inbox.
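The "it deleted my inbox" failure mode is why agent tool use typically gets wrapped in guardrails. As a hedged illustration only, not a description of any real agent framework (every name here is made up), destructive actions can be gated behind a dry run plus an explicit confirmation:

```python
# Sketch: gate destructive agent actions behind a dry run + confirmation.
# Illustrative names only; not from any real framework discussed here.

DESTRUCTIVE = {"delete", "archive_all", "send"}

def run_tool(action: str, targets: list, confirm=None) -> str:
    """Execute a tool call; destructive ones require explicit confirmation."""
    if action in DESTRUCTIVE:
        plan = f"would {action} {len(targets)} item(s)"
        if confirm is None or not confirm(plan):
            return f"DRY RUN: {plan} (not executed)"
    return f"executed {action} on {len(targets)} item(s)"

# An agent asked to "tidy up my inbox" only simulates by default:
print(run_tool("delete", ["msg1", "msg2", "msg3"]))
# → DRY RUN: would delete 3 item(s) (not executed)

# Only with a human (or policy) sign-off does it actually run:
print(run_tool("delete", ["msg1"], confirm=lambda plan: True))
# → executed delete on 1 item(s)
```

The design point is the one in the conversation: the hard part isn't doing the task, it's being sure the irreversible version only happens on purpose.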
>> Yeah. And to the point that you were just making, and also Anthropic: presumably you're not in the camp of believing that people are going to be building their own software with Claude Code?
>> It's funny, it's almost like a straw man, except there are, presumably, real, rational, sentient people who actually say this stuff. I don't think it's particularly interesting to explain why people aren't going to code their own Stripes. I think what's more interesting is to think about a broader taxonomy of software, which is that you have the big enterprise ERPs, what they call systems of record: you need to store the same data in the same way for 50,000 people and a billion transactions a year, and no, you can't have David Brent from The Office making his own. But then you have tasks and workflows where that single system, call it SAP, is too inflexible, and so you break it out, unbundle it into something dedicated and vertical; you get a vertical tool. And this is why the typical big US company today has, depending on your numbers, 400 to 500 vertical SaaS apps. I think you guys invested in Frame.io?
>> Yes.
>> Yeah, and Frame.io is unbundling Google Sheets and Dropbox.
>> Mhm.
>> That's what people were doing: they were managing video workflows in Google Sheets and Dropbox and email, and Frame.io says, no, we're going to turn this into a dedicated tool. And, I don't know, theoretically you could have done that in Oracle, but, like, no. So you've got the vertical tool that's specialized, and you've got the horizontal general-purpose tool, and then you've got this middle case of using Google Sheets and email, or using Excel, or exporting a CSV or something, which you could almost call improvised software. It's not a process that's being done over and over again in the same way at the same time, that has to be recorded and tracked and has to have compliance and everything else; and it's not big enough, or no one's realized it's big enough, to make a dedicated tool. So you're doing it in Excel or Tableau or Google Sheets or email or something.
And now, on the one side, AI coding means coding is way cheaper and easier, and it also means there's a whole bunch of stuff that you couldn't do in software that now you can. And so there will be way more software, and that will pick up many more of those use cases that weren't automated before, either because they were too small or because you actually couldn't automate that thing with software, and now with AI you can.
>> Do you think there could be a related concept of ephemeral software?
>> Well, this was going to be the other half of it: now you expand this category of improvised software. Where it might have been that you get the CSV and do it in Excel, or maybe a Perl script if you're that kind of person, now maybe you'll ask the model to do it for you: analyze this for me, do this for me. And often it's stuff that you couldn't have done in Excel before. Look at all those PDFs, look at all those PowerPoints, and then look at this PowerPoint: is there something in all of those PowerPoints that we haven't said in this one? Which is the kind of thing that's got a fuzzy answer, that's "kind of right," that works very well with an LLM; except it wouldn't work if there were PDFs it can't read, so you have to tell people it can't read the PDFs. So there's that sense of a middle ground. Maybe my taxonomy is wrong, but I think that's a more interesting way of looking at it than just saying everyone will vibe code their own thing. No one will vibe code their own ERP or their own Frame.io, but they may ask Anthropic or Gemini or ChatGPT, can you do this thing for me, in a way that's sort of analogous to how they might have done it in Excel.
>> So what do you think happens to software then, to SaaS?
>> Way more software.
>> More software, but software as an
independent category in stock markets?
>> Again, unpick that. On one level, okay: if it's much cheaper and easier to write software, there will be more software. Who will be doing that? It's a combination of who understands how software works and who has really thought carefully about and understood the problem. Frame.io was not made by some guy or some woman working at a video production suite somewhere in Soho, because that's not how they think. They're not software people. They're also not product people. There's a different skill to actually working out what the code should be doing, what the problem is, and how you would work this out. So there will be way more software, both stuff that doesn't need AI except to create it and stuff that needs to use AI to do the new thing. Some incumbent software will go away, like big expert systems where an LLM can do it better. But this also seems to me like an analogy to SaaS, a repeat of SaaS, in that SaaS meant you had an order of magnitude, probably two orders of magnitude, more software; some incumbents got completely screwed by it; and loads of tasks that you couldn't have automated before got automated, because SaaS made it much cheaper and much easier to go to market and to unlock those problems. As I said, when we went from mainframes: the big company had, what, five pieces of software; with on-prem you've got dozens of pieces of software; with cloud you have hundreds of pieces of software; and so you should presume that with this you'll have way more software. You'll also have way more stuff being automated where you don't need a dedicated tool, where you don't need to really shut your eyes and work out every step of the workflow and the process, how they should work and what you should do with them. It's just a one-off improvised thing where you'll get the model to do it. So you'll have way more stuff being done in software. I mean
people last year suddenly discovered, looked up, the Jevons paradox on Wikipedia...
>> ...pretended that they knew all along what it was.
>> Yeah. The funny thing is I actually kind of did know about this, more just because I'm sort of interested in industrial history; I vaguely remember hearing about it. But I looked at it again and thought, this is just price elasticity. That's all you're really talking about: if you make it cheaper and easier to do something, you might do the same thing for less money, or you might do more for the same amount of money, or you might do more with more money if you have a completely different ROI. Which is exactly what you see in financial services: spreadsheets did not result in a collapse in the number of people working in finance, quite the opposite. You have way more people in finance, because now it's possible to do all this new stuff that you couldn't have done before. Back when it took you a week to do a DCF, you did a DCF and that was it, you were done. If it takes you 10 minutes to do 20 DCFs, then you do way more DCFs. And
so then, of course, it's kind of my point about the jaggedness of adoption versus the jaggedness of what the models do. There's almost a cliché people always talk about, the Marc Andreessen "software is eating the world" thing of Uber and Airbnb: Uber didn't sell software to taxi companies, Airbnb doesn't sell software to hotels, they changed what you mean when you say "hotel." Fine. So go look at the market share. In many cities Uber basically demolished the taxi business and also unlocked huge new demand. In New York, I think Uber rides per day are double or triple what yellow cabs were, and yellow cabs are down by three quarters or something, ballpark numbers. Look at hotels. Okay, maybe hotels still grew, maybe they grew a bit slower, maybe not. And Airbnb was mostly additive.
And you dig into that, and the answer is... I remember seeing somebody on social media saying, the problem with Benedict is he always says the answer is "it depends." It's like, well done, thank you for paying attention.
>> Yes, it depends. Well, is software going to completely change hotels and taxis? How? Yes. How much?
>> Well, those are completely different things. Yeah.
>> So, my fiancée goes on a business trip to some Midwest American city, and she lands at 9:00 at night and she's got a client meeting at 8:00 the next morning, and she wants a gym and room service and a fridge and a bath. She's not going to go and stay in an Airbnb. Absolutely zero possibility that she's going to stay in an Airbnb. And business travel is half the hotel business. You can proliferate examples as often as you like. Why did the internet have a bigger impact on selling consumer electronics than on selling high fashion? Well, it depends.
So this is the problem I have with people trying to score professions...
>> ...by AI exposure.
>> Because you're probably directionally right: yes, it feels intuitively like that profession is more exposed to AI than this profession.
>> And we're talking about GDPval, or that kind of benchmark.
>> All of that kind of stuff. It's probably directionally right, but it's directionally right in the same way that an analysis you did in 1997 about the internet would have been directionally right. Most of it would probably be more or less true. But you want to take all the numbers off, because to tell yourself that this one is 96.5 and that one is 78 is just ludicrous.
>> But you would not have got Uber from that analysis. You would have said, well, taxi drivers, how would the internet change that? Obviously it won't touch that one at all. You would have said newspapers, maybe.
>> Well, even that, again, is hindsight; at the time, newspapers looked at this and thought, well, this is going to be great. I mean, remember the AOL Time Warner deal? Why did AOL buy Time Warner? A whole bunch of magazines that don't exist anymore. I mean, Warner too, but people didn't really understand what it was that the internet would do to the media business. And then within that, clearly, the impact on regional newspapers was completely different from the impact on Disney. Disney's fine; regional newspapers disappeared. And in hindsight you can say, well, what really happened was you unbundled the physical assets from the underlying product, and if your defensibility was based on owning a physical asset, and that physical asset suddenly stopped mattering, then your whole business model has just exploded. You could apply that lens to Airbnb and also to Uber, but then of course those two turned out very differently. The other side of this: pull the Uber example through to today. You can do that evaluation and you say, well, fitness instructors are fine.
>> Okay, but have you seen... I put my phone in my room and point it at me and turn on the AI with the camera. Why do I need, like...
>> I don't know. Maybe that won't work, maybe it will. You can't know those things at that level of granularity. I think that's kind of the problem. What you can do is point to thought experiments, I suppose, and say, well, here's a test you can apply: look at this field and ask yourself, does that field have a question? Does that field have a problem?
>> But do you think that, similarly to a lot of what happened with the internet, we need to go through a phase of destruction first before we figure that out?
>> Well, will a lot of these companies go bust? Yes. I think I may have mentioned last time that there's this classic book on the history of bubbles whose title is "This Time Is Different," which has a double meaning: in a bubble, people say no, it's different, it's not a bubble, and they're wrong. But also, when they say it's different, they're kind of right. The dotcom bubble was different from every previous bubble, and what's going on now is clearly different from the dotcom bubble. We don't have loads of IPOs of consumer companies with no profits; we don't have any IPOs. It's not being driven by retail speculation in public market stocks. It's not being funded by venture capital, apart from anything else. But it can still be a bubble, which is why that's an uninteresting observation. Do we have a lot of overinvestment? Yes, of course. Will a bunch of this end up not producing a return? Yes, of course; that's how this stuff works. Where are the smoking holes in the ground? A bit more difficult to tell. There are people who will look at the neoclouds or look at Oracle and say leverage doesn't tend to end well, but, hey, I haven't been an equity analyst for 20 years. I don't know.
>> No investment advice here, but you should just go back and look at the internet and look at mobile and make a list of all the stuff that didn't work. All the stuff that was a big deal and really exciting and cool and interesting, all the acronyms and companies and concepts and ideas that didn't work. Well, of course a bunch of stuff that people are working on now is going to not work. That's just how the world works. That's how innovation works. There'll be a whole bunch of creative destruction, and a lot of it won't end up being the thing. I think Hunter Walk once said that money in Silicon Valley is like one of those little rubber bouncing balls: you throw it into the room and it just bounces around, and there'll be a bunch of people who get rich because the ball hit them at the right time. It's like the person who joined WhatsApp two days before Meta bought it.
>> Well done, nice guy. But...
>> And to unpack some of this: you mentioned we're in '97. So we're in '97 in terms of figuring this out, but are we also in '97 in terms of, oh, there's a '99 coming? You think?
>> You know, you can't call bubbles. There's the joke about the economist who successfully predicted ten of the last five recessions.
>> Yeah.
>> You can't call the timing. If you could, we'd be in a different universe. If this is not a bubble now, it will be. Some of these companies will end up like that, and you can point to some of the places where that might happen; obviously many people know where people are nervous. But you don't know which, and you don't know the timing.
>> Yep.
>> What you do know is that you've got this hugely consequential technology, and that's what drives all of this alpha and all of this uncertainty.
>> Great. Going back to big tech, perhaps, with Oracle, which you just mentioned. Isn't that both a gamble and also kind of rational, for a company like Oracle to be leveraging an old business to turn it into something new?
>> Well, in a bubble, generally everybody's a rational actor. Almost everyone's a rational actor given their situation. If you're Sam Altman, you've got a commodity technology. You're competing with people who have giant legacy cash flows. You don't have your own infrastructure. You don't really have any differentiation, but you've got massive mindshare. So what do you do? Well, you try and swap that mindshare for hard assets, you try and talk your way into a self-fulfilling prophecy, and you try and turn those lightly engaged 900 million weekly active users into something more tangible. If you're Larry Ellison, you've got this very cash-generative legacy business that's been in structural decline for 25 years. Most people going through YC have never heard of Oracle, almost literally. No one has invited you to a party since, like, 1998. So what do you do when here is this wave?
>> You grab onto it with both hands and you burn your way through. The same with Nvidia. I haven't updated my number here, but I think in Q3 last year Nvidia had something over $70 billion of trailing 12-month free cash flow. They can't give the money to TSMC fast enough. TSMC won't take it fast enough; TSMC says, dude, this is a cyclical industry, go home, no, we're not going to triple our capacity this year. And ASML can't ship the stuff fast enough either. So what would you do with that 70, 80, 90 billion? Put it in T-bills, or put it into building infrastructure, building market position, building market share, building up your ecosystem?
>> Including in circular deals.
>> You know, I'm old enough to remember when this was called vendor financing. Vendor financing, as long as you're disclosing it and you're not lying about where the money's coming from, there's nothing wrong with it in principle, but it's leverage, and leverage is always fine when stuff is going up. And we've got an awful lot of different kinds of leverage going on at the moment, whether that's SPVs or circular revenue or all this kind of stuff. It's all leverage. That always works until it stops working, and when it stops working, then you've got a problem.
>> Still in the same vein of comparison between the dotcom bubble and crash and today: do you take some comfort in the concept of peace dividends from all of this, that it ends up working out for the economy?
>> One of the really basic ways this is different from every other platform shift: I spend a lot of time saying, well, this is kind of how platform shifts work, and this is what happened the last five times. The one way this is unquestionably different is that with all the other platform shifts, we knew what the physical limits of the silicon were. We knew how it worked. We knew what could happen next month. So you didn't know what the iPhone 3 or 4 would be, but you knew it wouldn't fly and it wouldn't have a one-year battery life. You didn't really know how the internet would evolve in 1997, but you knew that the telcos wouldn't give everybody in the world fiber internet next week. Whereas with LLMs, we kind of don't know the physical limits of how this could evolve. It may be that next week we have a paper that means you can get more or less the same results for 1% of the compute. Maybe that's a silly statement, but it might be 10% of the compute, and we don't know. Whereas you did know absolutely no one was going to publish a paper saying you could get the same compute with 1% of the transistors on a chip. We don't know those physical limits, and so we don't know the parameters of what could and couldn't happen to cause a pricing collapse. What's happened mostly so far is the other way: we keep inventing ways of using 10x more tokens. You get reasoning models and then you get agents, and it's like, great, we're using 100 times more tokens now. The funny thing is that predicting token usage, to me, is like being in the late '90s and looking at bandwidth consumption. It's the metric you've got, but imagine it's 2003 and we're saying, you know, YouTube bandwidth use is doubling every month. Okay, well, that sounds good, I guess, but there are five or six different multipliers producing that, and you don't really know what any of them are. Same thing now with AI capex. The usage is going up, and there are multipliers that drive the usage up: more people using it, people using it more, people doing reasoning, people doing video and imaging, people using agents, corporations using it, people using it for coding, people running it all day. So what do you mean by "the tokens are going up"? And then you've got the efficiency gains: I think Satya Nadella said that inference cost halves every three months. Okay, great, so that's pushing it down. But then you're chasing the next model, and the next model is always bigger and more expensive, and the model you have is going to be irrelevant in six months when the next model comes. So how long are you chasing the frontier? And meanwhile, we said this at the beginning, but this year Meta says it will spend over 50% of revenue on capex. Not 50% of profits or cash: 50% of revenue on capex.
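The tug-of-war described here, demand multipliers pushing token usage up while per-token inference cost falls, compounds quickly in both directions. A back-of-envelope sketch, where the halving rate is the Nadella quote from the conversation and every other number is purely illustrative:

```python
# Back-of-envelope: token-demand multipliers vs. falling inference cost.
# The "halves every three months" rate is the quote above; all the
# multiplier values are made-up illustrations.

def inference_cost(initial_cost: float, months: int, halving_months: int = 3) -> float:
    """Per-token cost if it halves every `halving_months` months."""
    return initial_cost * 0.5 ** (months / halving_months)

# After a year, per-token cost is 1/16th of where it started:
print(inference_cost(1.0, 12))  # 0.0625

# But stack the demand multipliers (more users x heavier use x
# reasoning x agents...) and total spend can still rise:
multipliers = {"more_users": 2, "heavier_use": 2, "reasoning": 5, "agents": 10}
demand_growth = 1
for m in multipliers.values():
    demand_growth *= m  # 200x more tokens

spend_ratio = demand_growth * inference_cost(1.0, 12)  # vs. a year earlier
print(spend_ratio)  # 12.5: cost per token fell 16x, total spend still grew 12.5x
```

This is the point about not knowing what any individual multiplier is: the same headline "tokens are doubling" is consistent with wildly different mixes of users, intensity, and cost.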
>> So Google and Microsoft are not far behind, but they can't double that again. They can't spend 100% of revenue on capex next year, and they've already gone out and started borrowing money, and then there's a whole bunch of stuff in leases, so the actual capex number is double-digit percentages higher than that if you look at the leases and all of that kind of stuff. So there's a financial gravity point at a certain stage, which is that they can't keep increasing at that rate. It can't be $2 trillion a year next year. Maybe that's the only financial limit, the only actual hard limit, we have. And so you've got this envelope of how long you can keep building this stuff out. Is it just that we're building the factory? Are we going to get to a point, back to my point about oligopolies that we talked about a while ago, an end state where there are, say, four companies that each spend $250 billion or $500 billion a year on building and maintaining this stuff? What's the lifespan of the chips? Is it going to look like this, or is it going to do like that? Are we going to get to a point where the industry collectively spends $1 to $2 trillion a year on infrastructure, every year, and basically makes marginal cost on that?
>> Mhm.
>> Then what happens on top? And then you get to these very vague TAM questions, where Sam Altman and Jensen say the TAM is global GDP, or no, it's more than global GDP. Global GDP is what, 70 trillion, something like that?
>> Yeah.
>> So we're going to double GDP, and 5% of that will go to Sam and the other 5% will go to Jensen and some of it will go to ASML. Like, great, thank you. There's a big fallacy built into those numbers, a lot of which is, obviously, that it's not just the software TAM, it's the entire economy of services.
>> And I think this idea that you could be charging for AI at the same price as you would charge for a human worker doesn't seem right to me.
>> No. I mean, a sort of micro observation, back to the GDPval thing: the New York finance professor who tried to estimate the TAM for Uber by saying, this is the TAM for taxis, so that's Uber's TAM. No, that's not Uber's TAM; Uber's TAM is something completely different. You get to something like advertising: the TAM for digital advertising has to include retail rents, and it has to include shipping, and it basically has to include retailer margin, which is 50% of retail. So the TAM for digital advertising is not advertising, it's something else. And yes, GDP growth is GDP growth. You would have to be quite wild-eyed to think that we're going to go from low single-digit GDP growth to double-digit GDP growth. That would be a different conversation. The human labor thing: I
mean, look, go right back to academic discussions of the industrial revolution: automation happens in places where labor is expensive, because if labor is cheap, why bother? I was reading a book a couple of weeks ago about Italy in the '30s, about the south. There's somebody in the south of Italy in the '30s saying there's one car in the whole province, and it's like, well, why wouldn't you get a car for that? Well, because people are free. You just tell a person, you tell a peasant, and there are still peasants, to walk eight hours to that town and get something and walk eight hours back, and they do it.
>> And you don't even have to pay them.
>> So why would you buy a car?
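The first-year-economics answer here, and the price-elasticity point from earlier with the DCF example, fit in one toy calculation. All numbers below are made up for illustration:

```python
# Toy model of the two steps: automation needs an ROI relative to a
# person, then price elasticity decides whether you do the same thing
# for less or do far more. Illustrative numbers only.

def worth_automating(labor_cost_per_task: float, machine_cost_per_task: float) -> bool:
    """The Italy-in-the-1930s case: if labor is (nearly) free, there's no ROI in a car."""
    return machine_cost_per_task < labor_cost_per_task

def tasks_done(budget: float, cost_per_task: float) -> int:
    """With elastic demand, a cheaper task means more tasks, not a smaller budget."""
    return int(budget // cost_per_task)

# A hand-built DCF: a week of analyst time, say $5,000, so you build one.
print(tasks_done(budget=5_000, cost_per_task=5_000))  # 1
# A spreadsheet DCF: 15 minutes, say $25. The same budget buys 200 of them.
print(tasks_done(budget=5_000, cost_per_task=25))     # 200
# And when the "machine" costs more than the free peasant, you don't automate:
print(worth_automating(labor_cost_per_task=0.0, machine_cost_per_task=25.0))  # False
```

The third step in the argument, things no number of horses or people could do at all, is the case this simple ROI comparison can't capture, which is exactly the point being made.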
>> Um, so that's the very basic, first-year economics student answer to this: the automation has to have an ROI relative to a person. Then you get back to the price elasticity point. And to me, and again, I have a slide with three bullet points. A friend of mine said, "Benedict, you talk in slides," which is probably true; I think in slides. And so step one is, okay, if you make it cheaper and easier to do a given thing, do you do the same thing for less money, or do you do more for more money, or for more people? Step two is, was having a building full of people doing that your moat? The example people kick around here is health insurance. I think somebody said the marginal cost of arguing is now zero. So if your business is based on making it really painful and difficult to do what you do, and suddenly it's not painful and difficult, was that your barrier to entry? But then, to me, the interesting part is the third step, which is that there's a whole class of stuff that you just couldn't do at all. You know, go back to talking about the Jevons paradox and steam engines. Imagine you want to make an express train from London to Scotland in 1800. Never mind that you didn't have steel, couldn't mass-produce iron, and couldn't have laid the rails; presume you could have built the whole thing, but you didn't have steam. You couldn't just buy 10,000 horses, put them on the front, and have them pull your train to Scotland. It doesn't matter how many horses you have; you just can't do that. But there are whole businesses now where you could not do that with people. No matter how many people you had, you couldn't run a quant fund with 10,000 people instead of a computer. Even if you
could pay the 10,000 people, you just couldn't do it. Um, years ago somebody said, I can't remember who it was, that there's a huge difference between when something gets very cheap and when something gets actually free, which is what happened with internet bandwidth at a certain point, and happened with mobile bandwidth. And again, it's: are you doing the old thing with a new thing, or are you doing something different with a new thing? Which is a sort of discursive way of talking about your question. But, you know, we're going to have people do stuff with AI that doesn't replace a person; it does a thing that would have needed a million people, which is what happens with all automation. That's what happened with steam engines. It's what happened with electricity, what happened with aircraft and steel and everything else. You don't do stuff as though you had 100 horses or a thousand horses; you do a thing you couldn't have done with horses, no matter how many horses you had.
And so, yes, well, but that's also, you know, I'm not an academic economist, but there's the famous quote that you can see the information revolution everywhere except in the productivity statistics. And so then you get these arguments about, well, what's in the productivity statistics? And switching from buying 40 devices that cost thousands of dollars to buying this is a decline in GDP. There's a measurement problem and a metrics problem in all of those things.
But to the core of it: are you going to replace McKinsey with AI? Well, that's like saying, are you going to replace KPMG or PwC with spreadsheets? Kind of not the right way of thinking about it, because on the one hand you get that price elasticity, and on the other hand, if you ask that question, you've kind of just told me you don't actually know anything about what McKinsey does. I made the mistake of replying to somebody on LinkedIn who was talking about this, and they said, "But look, publicly listed consultancies have already crashed." You know, but they're all partnerships. "Look, Booz Allen has crashed." It's like, okay, do you need me to explain why Booz Allen's share price has collapsed since Trump was sworn in? Because it's not to do with AI. It's a whole other conversation.
I wanted to do some digging into this accounting thing: what did automation do to accounting in the past? I decided maybe I should go and look for audit costs, and it turns out you can get all sorts of data for average audit costs since Sarbanes-Oxley, which have basically been flat since the early 2000s, despite everything that's happened in software. Average audit cost hasn't changed. But then, well, I can dig a bit further, and I got into what happened to audit costs in the 70s and 80s.
>> Yeah.
>> And what you get is all these 50-page academic reports that mention all sorts of stuff that changed audit costs and what was going on in audit, and none of them mention computers. There are like 700 other things going on that are changing how audit works, none of which are actually computers. And it occurred to me that this sort of applies in every industry. So I was at an event like 2 years ago, listening to Martin Sorrell, who founded WPP, one of the big global ad groups, and he was talking about AI. And he said, no, the big impact of AI is not making more advertising assets. The big impact of AI is that actually every big ad agency has buildings full of people doing really, really boring crap with spreadsheets and faxes, loading the ads into Facebook, almost literally. One of the huge impacts of AI in advertising is on the operations, not on the stuff where, if you know nothing about advertising, you would just think, well, it'll make more pictures. Like, well, yes, but no, the big thing is all the guys in the back office shuffling paper around. And I think that point applies to every industry: you're sitting on the outside and you go, oh, of course AI will do that. And if you're on the inside, you're like, it's over here, and that's not the hard part.
>> Yeah. And you talk to a lot of big corporations as part of your consulting and your public speaking. What's your sense of the overall sentiment? Are people more bewildered than they used to be? Is some of this starting to sink in? What do you see them do?
>> So, I certainly wouldn't say bewildered. I mean, at least not any more bewildered than anyone in tech. And I think, you know, what's the center line? If you're in control, you're not going fast enough; if you understand any of this, you're not paying attention. I came across the other day the famous physics professor line: "I'm going to teach you an eight-week class on quantum. Today I don't understand quantum, and at the end of this class, you won't understand quantum either." That certainly applies to AI. You know, it's like the Schleswig-Holstein question, the joke about that famously complicated 19th-century politics question. I think Palmerston said only three people have ever understood it: a German professor, who has gone mad; the Prince Consort, who is dead; and myself, who has forgotten. Um, anyway, you can cut that out. The point is that no one really knows the answers to, like, does it go to AGI, and what will it do next year? But everyone's got a bunch of stuff deployed.
Everyone deployed Copilot and went, "Oh, okay. That wasn't very successful." It's kind of like giving everybody the internet in 1997 and saying, "There you are. Be more productive." It didn't really work. Everyone now has a bunch of pilots. Some of those pilots have made their way into production, some of them haven't. Some of them worked, some of them didn't. Everyone is sort of scratching their head and thinking, "Okay, now what?" And there's certainly a curve of awareness. So, you know, I talk to some big industries where my brief before I go up reflects that. Like, I spoke at an insurance conference a couple of weeks ago, and part of the brief was: everybody here remembers people talking about how drones were going to change insurance, so we need to make it clear this is not another of those things; this is a bit more real. And so there are people who are at that stage in the curve, which also maps to how hard it is to work out what you would do with a non-deterministic system. Lawyers are probably somewhere in the middle on that: obviously this is useful, but also it's really easy to get yourself killed with it. But the head of the curve is more like, okay, is that it?
So, like, I spoke to the CMO of a big retailer last summer, and they said, okay, we've all got review summarization. We had a support chatbot, and then we took that down because it was a disaster, and maybe we'll put one up that isn't a disaster. And they've got natural language search, so you can say, you know, "What should I buy for a picnic?" (this is a classic use case from Walmart). And they've got review summarization, and they've got a SKU-tagging project going on, and the marketing team is using it, and they've got like eight different projects automating obscure back-end things deep inside the company. And Walmart will say, like a lot of people, we've been doing machine learning since the 80s, we've been doing AI since the 80s; this is just more stuff, and we've got a huge tech organization. Which kind of comes back to the framing I've used a lot: I think a lot of industries now are well down the path of "we can use this to do a thing we were already doing, but better," and "maybe we can see a few things we can do with this that we couldn't do before, or that didn't work very well, and now they'll kind of work."
So, I was at a logistics conference a while ago, and they were talking about how the warehouse person can just point their phone at a bunch of boxes on a pallet and it will pick up all of the barcodes at once. And you think, yeah, and then you realize, okay, that's probably like an hour a day that's saved. So there are a lot of individual point solutions where people are deploying stuff. I think where it's much harder to get your head around is still: yes, but what's the new new thing, and what might the fundamental change look like, as opposed to "great, now we've put our catalog in a PDF on our website"? And I don't think anybody in tech knows that either. Everyone is still feeling around, which is why, I think in the last week, both Sequoia and Kleiner did like a two-by-two of suggestions: what might the axes be? What might useful ways be to think about this?
>> Yeah, I was about to ask you what that means ultimately for AI builders and startups. Is one takeaway that all of this is still largely fluid and yet to be determined, and therefore there are pockets of opportunity in different places?
>> Paradoxically, the answer for a builder is actually super easy. You know, go back to Frame.io, which you guys invested in; and indeed, when I was at a16z, I was in the pitch, and we didn't invest in it. I can tell you why afterwards. Um, in principle, you could have had that idea at any time in the previous decade. I mean, maybe some of the technical implementation might not have worked, doing it in the browser might not have worked, but it's basically Google Docs for video. And, you know, set aside the difficulty of doing that kind of video in the browser: theoretically, you could have done that in 2010. Most SaaS companies are database wrappers, where somebody realized that here is this problem, and here are the people who have it, and here is a way of turning it 90 degrees, and this is your insertion point, this is how you build it and take it to market. And the hard part of that was not writing a bunch of SQL queries and setting up some database tables on AWS. The hard part of writing software is not writing the code. It's all the other stuff around it: what should the code be doing, and how would we tell people that they should be using it, and what should we charge, and how do we go to market, and which bit of the market should we be selling to? It's all the other stuff, and I don't think that has changed at all. It's just that now, on the one side, it's way quicker and cheaper and easier to build your software, and on the other hand, there's a whole class of stuff that you couldn't do with software before that now you can.
Um, I remember looking at machine learning when machine learning was "AI." I think I said this last time: there's a progression where it's only AI when it's new; it's not "technology" after about 10 years. The point was that machine learning came as image recognition first; that's what starts working with ImageNet. I would show this to, like, a German reinsurance company, and they would say, "Well, that's great, but we don't have any images. I mean, maybe damage assessment, but really, that's not what we do." And it took a while to work out that this was pattern recognition. Once you can conceptualize it as pattern recognition, it's a lot easier to think, well, what can we turn into pattern recognition? Of course, there was also a class of stuff where people turned things into image recognition that weren't image recognition before. But in principle, you have to think, well, what's the right level of abstraction for understanding what this is?
And the same thing now with generative AI. I don't think we've got a good sense of what it is, which is why people reach for framings. I used to say AI gives you infinite interns. Is it interns, is it associates, is it automation, is it the next stage of humanity? None of those answers is going to be exactly right, but you want them to be useful as ways of thinking about where you might look for those opportunities. But I don't know. As a builder, what would be an interesting project is: what percentage of entrepreneurs went out to look for an opportunity, and what percentage started from seeing the problem? Because there are certainly some entrepreneurs who say, "I want to do a startup; I'm going to look for X, Y, Z." Then there are entrepreneurs who say, "I believe mobile is the thing," or "I believe AI is the thing," or machine learning, "and I'm going to look for things I can do with that." And then there are people who say, "For the last 10 years of my life, I've desperately wanted to fix this problem in this industry." I suspect that it's the third category that produces the best outcomes. I don't know; it would be an interesting analysis. Yes, you can pick up AI and go around hitting everything with it: you know, it's a hammer and everything's a nail. Or you can start from the other end and think, now, finally, I can solve that problem, or I can turn this into a machine learning problem. And I think probably the right answer is that those all produce great companies.
>> All right, Ben, we could quite literally go on for hours. This is so fascinating. I love our conversations. Thank you so much for being back on the MAD Pod, and I look forward to the next one.
>> Great. Thank you.
>> Thank you.
>> Hi, it's Matt Turck again. Thanks for listening to this episode of the MAD Podcast. If you enjoyed it, we'd be very grateful if you would consider subscribing, if you haven't already, or leaving a positive review or comment on whichever platform you're watching or listening to this episode from. This really helps us build the podcast and get great guests. Thanks, and see you at the next episode.