AI Eats the World: Benedict Evans on the Next Platform Shift
By a16z
Summary
Topics Covered
- AI Redefines as Mature Software
- AGI Forever Five Years Away
- AI Unknown Physical Limits
- Platform Shifts Inevitably Bubble
- Most Users Find No Daily AI Use
Full Transcript
ChatGPT has got 800 or 900 million weekly active users. And if you're the kind of person who is using this for hours every day, ask yourself why five times more people look at it, get it, know what it is, have an account, know how to use it, and can't think of anything to do with it this week or next week. The term AI is a little bit like the term technology: once something's been around for a while, it's not AI anymore. Is machine learning still AI? I don't know.
In actual general usage, AI seems to mean new stuff, and AGI seems to mean new scary stuff. >> AGI seems to be a little bit like this: either it's already here and it's just more software, or it's 5 years away and will always be 5 years away. We don't know the physical limits of this technology, and so we don't know how much better it can get. You've got Sam Altman saying we've got PhD-level researchers right now, and Demis says, no, we don't, shut up. Very new, very, very big, very, very exciting, world-changing things tend to lead to bubbles. So yeah, if we're not in a bubble now, we will be.
>> Benedict, welcome back to the a16z podcast. >> Good to be back. >> We're here to discuss your latest presentation, AI Eats the World. For those who haven't read it yet, maybe you can share the high-level thesis and contextualize it in light of recent AI presentations. I'm curious how your thinking has evolved. >> Yeah, it's funny. One of the slides in the deck references a conversation I had with a big-company CMO, who said: we've all had lots of AI presentations now. We've had the Google one and the Microsoft one. We've had the Bain one and the BCG one. We've had the one from Accenture and the one from our ad agency. So now what? It's sort of 90-odd slides, so there's a bunch of different things I'm trying to get at. One of them is just to say: well, if this is a platform shift, or more than a platform shift, how do platform shifts tend to work? What are the things we tend to see in them, and how many of those patterns can we see being repeated now? Some of the patterns that come out of that are things like bubbles, but others are that lots of stuff changes inside the tech industry: there are winners and losers, people who were dominant end up becoming irrelevant, and new billion and trillion dollar companies get created. But then there's also what this means outside the tech industry, because if we think back over the last waves of platform shifts, there were some industries where this changed everything, created and uncreated industries, and others where this was just a useful tool. If you were in the newspaper business, the internet had a very different impact, and the last 30 years look very different, than if you were in the cement business, where the internet was just kind of useful but didn't really change the nature of your industry very much. And so what I tried to do is give people a sense of: what is it that's going on in tech? How much money are we spending? What are we trying to do? What are the unanswered questions? What might or might not happen within the tech industry? But then, outside technology, how does this tend to play out? What seems to be happening at the moment? How is this manifesting in tools and deployment and new use cases and new behaviors? And then, as we step back from all of this: how many times have we gone through all of this before? It's funny. I went on a podcast this summer, and my opening line was something like: well, you know, I'm a centrist. I think this is as big a deal as the internet or smartphones, but only as big a deal as the internet or smartphones. And there were like 200 YouTube commenters underneath saying, it's more than that, he doesn't understand how big this is. And I think, well, it was kind of a big deal. >> It was kind of a big deal. >> And I sort of finished the deck by looking at elevators, because I live in an apartment building in Manhattan and we have an attended elevator, which means there are no buttons; there's an accelerator and a brake, and the doorman gets in and drives you to your floor, like a streetcar. And in the '50s, Otis deployed automatic elevators, where you get in and you press a button. And they marketed it by saying, "Ah, it's got electronic politeness," which meant the infrared beam. And today, when you get into an elevator, you don't say, "Ah, I'm using an electronic elevator, it's automatic." It's just a lift. Which is what happened with databases and the web and smartphones. It's funny, I've done a couple of polls on this on LinkedIn and Threads: is machine learning still AI? The term AI is a little bit like the term technology, or automation: it only applies when something's new. Once something's been around for a while, it's not AI anymore. So databases certainly aren't AI. Is machine learning still AI? I don't know. I mean, obviously there's an academic definition, and people will say, "this guy's an idiot, of course there's a definition of AI." But in actual general usage, AI seems to mean new stuff.
>> Yeah. And AGI seems, you know, like new scary stuff. >> Yeah, it's funny, I was thinking about this. There's an old theologians' joke that the problem for Jews is that you wait and wait and wait for the Messiah and he never comes, and the problem for Christians is that he came and nothing happened. The world didn't change, there is still sin; for all practical purposes, nothing happened. And AGI seems to be a little bit like this: either it's already here, so you've got Sam Altman saying we've got PhD-level researchers right now and Demis saying, what, no we don't, shut up, and it's just more software; or it's 5 years away and will always be 5 years away. >> Yeah, it's interesting. Let's compare back to previous platform shifts, because some people look at something like the internet and say, hey, there were net new trillion-dollar companies, Facebook and Google, that were created from it, and all sorts of new emerging winners. Whereas they look at something like mobile and say, hey, there were big companies like Uber and Snap and Instagram and WhatsApp, but these were billion-dollar or tens-of-billions-of-dollars outcomes; really, the big winners were in fact Facebook and Google. And so, in some sense, mobile perhaps was
sustaining. Feel free to quibble with the definitions of sustaining and disruptive, but sustaining in the sense that maybe more of the value went to incumbents, companies that existed prior to the shift. I'm curious how you think about AI in light of that. Are more of the gains going to come to net new companies, like OpenAI and Anthropic and others
that follow? Or are more of the gains going to be captured by Microsoft and Google and Meta and the companies that existed prior? >> So there are several answers to this. One of them is that you have to be careful about framings and structures, because you end up arguing about the framing and the definition rather than about what's going to happen. They're all useful, but they all have holes in them. What mobile did was change a bunch of things fundamentally. It shifted us from the web to apps, for example, and it gave everybody in the world a phone, a pocket computer. Even today there are less than a billion consumer PCs on earth, and there are something between five and six billion smartphones. And it made possible things that would not have been possible without it, whether that's TikTok or, arguably, things like online dating. You can map those against dollar value; you can also map those against structural change in consumer behavior and access to information. And I think you could certainly argue that Meta would be a much smaller company if it wasn't for mobile, for example. So you can argue the puts and calls on this stuff a lot. Certainly not all platform shifts are the same, and you can do the standard teleology of saying, well, there were mainframes and then PCs and then the web and then smartphones, but you kind of want to put SaaS in there somewhere, and open source, and maybe databases. These are useful framings, but they're not predictive. They don't tell you what's going to happen; they just give you one way of understanding some of the patterns that we have here. And of course the big debate around generative AI is: is this just another platform shift, or is it something more than that? And the problem is, we don't know, and we don't have any way of knowing other than waiting to see. So this may be as big as PCs or the web or SaaS or open source, or it may be as big as
computing. And then you've got the very overexcited people living in group houses in Berkeley who think this is as big as fire or something. Well, great. But does this create new companies? You go back to mobile: there was a time when people thought that blogs were going to be different to the web, which seems weird now. Google needed a separate blog search; this was seriously a thing. There was a time when it was really not clear, and I think you can generalize this point. You go back to the internet in the mid-'90s: we kind of knew this was going to be a big thing, but we didn't really know it was going to be the web. Before that, we didn't know it was going to be the internet; we knew there were going to be networks, but it wasn't clear it was going to be the internet. Then it wasn't clear it was going to be the web. Then it wasn't really clear how the web was going to work. And when Netscape launched, Mark Zuckerberg was in junior high, or elementary school or something, and Larry and Sergey were students, and Amazon was a bookstore. So you can know it but not know it. And you could make the same point about smartphones: we knew everyone was going to have an internet-connected thing in their pocket, but it was not clear that it was basically going to come from a PC company from the '80s and a search engine company. It was not clear it wasn't going to be Nokia and Microsoft. So I think you have to be super careful about making deterministic predictions about this. What you can do is say: well, when this stuff happens, everything changes. And that's happened five or 10 times before. >> I'm curious how you got conviction in this idea, or the prediction that AI is going to be as big as the internet, which of course is pretty big,
but, Benedict, I'm not yet at the conviction that it's going to be any bigger. I'm curious what inspires that sort of statement, and also what might change your mind either way: that it might not be as big as the internet, because of course the internet was obviously very big, but also that perhaps it might be bigger? >> Well, I remember I made a diagram of S-curves going up slightly, and someone said, well, what's the axis on this diagram? I don't want to get into whether this is 5% bigger than the internet or 20% bigger. I think the question is more: is it another of these industry cycles, or is it a much more fundamental change in what technology can be? Is it more like computing, or electricity, a structural change, rather than here's a whole bunch more stuff we can do with computers? I think that's the question. And there's a funny disconnect in the debates about this within tech, because I watched one of the OpenAI live streams a couple of weeks ago, and they spend the first 20 minutes talking about how they're going to have human-level, PhD-level AI researchers next year, and then the second half of the stream is: oh, and here's our API stack that's going to enable hundreds of thousands of new software developers, just like Windows, and in fact they literally quote Bill Gates. And you think, well, those can't both be true. Either I've got a thing which is a PhD-level AI researcher, which by implication is like a PhD-level CPA, >> Yeah. >> or I've got a new piece of software that does my taxes for me. Which is it? Either this thing is going to be human-level, and that's a very challenging, problematic, complicated statement, or this is going to let us make more software that can do more things that software couldn't do before. And I think there's a real schizophrenia in conversations around this, because it's all scaling laws and it's going to scale all the way, and meanwhile, look how good it is at writing code. And again: well, is it writing code, or do we not need software anymore? Because in principle, if the models keep scaling, nobody's going to write code anymore. You'll just say to the model, hey, can you do this thing
for me? >> Yeah. Is it a little bit of a hedge, or a sequencing thing? >> Well, some of it's a sequencing thing. But in principle, if you think this stuff is going to keep scaling, why are you investing in a software company? >> Yeah. >> We'll just have this god in a box that can do everything. And I think this is the funny kind of challenge, and I think the fundamental way that this is
different from previous platform shifts is that with the internet, or with mobile, or with mainframes, you didn't know what was going to happen in the next couple of years. You didn't know what Amazon would become, you didn't know how Netscape was going to work out, and you didn't know what next year's iPhone was going to be, 10 years ago when we cared about that. But you kind of knew the physical limits. You knew in 1995 that telcos were not going to give everybody gigabit fiber next year, and you knew that the iPhone wasn't going to have a year's battery life and unroll and have a projector and fly, or whatever. But we don't know the physical limits of this technology, because we don't really have a good theoretical understanding of why it works so well. Nor indeed do we have a good theoretical understanding of what human intelligence is. And so we don't know how much better it can get. You could do a chart and say, well, this is the roadmap for modems and this is the roadmap for DSL, and this is how fast DSL will be, and then you can make some guesses about how quickly telcos will deploy DSL, and then you can say, well, clearly we're not going to be able to replace broadcast TV with streaming in 1998. But we don't have an equivalent way of modeling this stuff, of knowing what the fundamental capability is going to look like in three years. Which gets you to this kind of slightly vibes-based forecasting, where no one really knows. So Geoff Hinton says, "Well, I feel like," and Demis says, "Well, I feel like," but no one knows. >> And then Karpathy comes on our podcast and says, "I feel like it's a decade out." >> Yeah, I know. I saw this meme of that: he says the answer will reveal itself, and somebody, I was going to say photoshopped, but of course it wouldn't have been
photoshopped, turned him into a Buddhist monk wearing an orange robe: the future will reveal itself. But this is the problem: we don't know. We don't have a way of modeling this. >> Yeah. And so let's connect this to the upfront investment that some of these companies are making. Because we don't know, is there a risk of overinvestment leading to potential bubble-like mechanics? How do you
think about that question? >> Well, deterministically: very new, very, very big, very, very exciting, world-changing things tend to lead to bubbles. >> Yeah. >> And I don't think anybody would dispute that you can see some bubbly behavior now. You can argue about what kind of bubble, but again, that doesn't have very much predictive power. One of the features of bubbles is that everything goes up all at once, and everyone looks like a genius, and everyone leverages and cross-leverages and does circular revenue, and that's great until it's not. And then you get a kind of ratchet effect as it goes back down again. So yeah, if we're not in a bubble now, we will be. I remember Marc Andreessen saying, you know, 1997 was not a bubble, '98 was not a bubble, '99 was a bubble. Are we in '97 now, or '98, or '99? If we could predict that, we'd live in a parallel universe. I think there are maybe two more specific, more tangible answers to this. The first of them is that we don't really know what the compute requirements of this stuff are going to be, and forecasting that, beyond "more," feels a lot like trying to forecast bandwidth use in the late '90s. Imagine if you were trying to do the algebra on that. You'd say: well, this many users; how much bandwidth does a web page use? How will that change? How will that change as bandwidth gets faster? What happens with video? What kind of video? What bit rate of video? How long do people watch a video? How much video? And then you could build the spreadsheet, and it would tell you what global bandwidth consumption would be in 10 years. And then you could try and use that to back-calculate how
many routers Cisco is going to sell. And you could get a number, but it wouldn't be the number; there'd be a hundredfold range of possible outcomes from that. And you could make the same point about the algebra of consumption now. So right now we have a bunch of rational actors saying, well, this stuff is transformative and a huge threat, and we can't keep up with demand for it now, and as far as we know the demand is going to keep going up. And we've had a variety of quotes from all of the hyperscalers basically saying the downside of not investing is bigger than the downside of overinvesting. That kind of thing always works well until it doesn't. >> Yeah. >> And I saw a slightly strange quote from Mark Zuckerberg saying, well, if it turns out that we've overinvested, we can just resell the capacity. And I thought, let me just stop you there, Mark, because if it turns out that you can't use your capacity, everybody else is going to have loads of spare capacity as well. >> Yeah. >> All these people who are desperate for more capacity now: if it turns out we can get the same results for a hundredth of the compute, >> That will be true for everyone else too, not just you. >> Yeah. So in an investment cycle like this, you tend to get overinvestment, but beyond that, there are very limited predictions you can make about what's going to happen. I think the more useful way to look at this is: you've got these transformative capabilities that are already increasing the value of your existing products, if you're Google or Meta or Amazon, and you're going to be able to use them to build a bunch more stuff. Why would you let somebody else do that rather than doing it yourself, as long as you can keep funding and selling what you're building? >> Yeah. >> And it may well turn out that we have an evolution of models in the next year that means you can get the same result for a hundredth of the compute you're using today, bearing in mind that the cost is already going down, pick your numbers, 20, 30, 40 times a year. >> Yeah. >> But then the usage is going up. So, as I said, it's like trying to predict bandwidth consumption in the late '90s, early 2000s. You can throw all the parameters in, but it doesn't
get you to something useful. You just need to step back and say, yeah, but is this internet thing any good? >> Well, yeah, because I'm curious whether you see the bottlenecks as more on the supply side or the demand side. More technical constraints, or just: is AI any good? Are there enough use cases to justify this type of spend? What are you seeing, and what are you predicting? >> So maybe two answers to this question. The first of them is, I think we've had a bifurcation of the questions. There are now very, very detailed conversations about chips, and then very, very detailed conversations about data centers and funding for data centers, and then about what a new enterprise SaaS company built on AI looks like: what margins will it have, and how much money does it need to raise? So there are venture capital conversations, and many different conversations, within which, like, I don't know anything about chips. I can spell ultraviolet, but I don't know what an ultraviolet process is. It's more violet? I don't know. It's like the Milton Friedman line: no one knows how to make a pencil. A second answer might be: I think there are two kinds of generative AI deployment.
One of them is, there are places where it's very easy and obvious right now to see what you would do with this, which is basically software development, marketing, and point solutions for many very boring, very specific enterprise use cases. And also, basically, people like us: people who have very open, very freeform, very flexible jobs with many different things in them, and who are always looking for ways to optimize that. >> Yeah. >> And so you get people in Silicon Valley who are like, you know, I spend all my time in ChatGPT. I don't use Google anymore. I've replaced my CRM with this. And then, obviously, people who write code; if you're writing code, this works really well. If you're in marketing, there are all these stories of big companies making 300 assets where they would have made 30. And then Accenture and Bain and McKinsey and Infosys and so on sitting and solving very specific problems inside big companies. Then there's a whole bunch of other people who look at it and think, it's okay. And you go and look at the usage data, and you see, okay, ChatGPT has got 800 or 900 million weekly active users and 5% of people are paying. And then you go and look at all the survey data, which is very fragmented and inconsistent, but it all points to something like 10 or 15% of people in the developed world using this every day, and another 20 or 30% using it every week. And if you're the kind of person who is using this for hours every day, ask yourself why five times more people look at it, get it, know what it is, have an account, know how to use it, and can't think of anything to do with it this week or next week. >> Why is that? >> Yeah. >> Is it because it's early? And it's not a young-people thing either, incidentally. So is it just because it's early? Is it because of the
error rates? Is it because you have to map it against what you do every day? One analogy I always used to use, which isn't in the current presentation but was in previous ones: imagine you're an accountant and you see a software spreadsheet for the first time. This thing can do a month of work in 10 minutes, almost literally. >> Yeah. >> You want to recalculate that 10-year DCF with a different discount rate? I've done it
before you've finished asking me to. And that would have been a day or two or three days of work to recalculate all those numbers. Great. Now imagine you're a lawyer and you see it. And you think, well, that's great, my accountant should see it. Maybe I'll use it next week when I'm making a table of my billable hours, but that's not what I do all day. Excel doesn't do the things a lawyer does every day. And I think there's this other class of person that's like, I'm not sure what to do with this. And some of that is habit. Some of that is realizing, no, instead of doing it that way, I could do it this way. But that's also what products are. Every entrepreneur who came into a16z when I was there, from 2014 to 2019, and I'm sure now too: you could look at any company that comes in and say, that's basically a database, that's basically a CRM, that's basically Oracle or Google Docs. Except that they've realized there's this problem or this workflow inside this industry, and they've worked out how to use a database or a CRM, basically concepts from 5, 10, 20 years ago, to solve that problem for people in that industry, and to go in and sell it to them and work out how to get them to use it. And this is why, when you look at the data, depending on how you count it, the typical big company in the US today has 400 to 500 SaaS applications, and they're all basically doing something you could do in Oracle or Excel or email. >> Yeah. >> And that's the other side. I'm monologuing, I'm afraid, but this is the other side of what you do with these things. Do you just go to the bot and ask it to do a thing for you? Or does an enterprise salesperson come to your boss and sell you a thing that means now you press a button and it analyzes this process that you never realized you were even doing? >> Yeah. >> And I feel like that's, I mean, that's why
there are AI software companies, >> right? >> Really, isn't that what they're doing? They're unbundling ChatGPT, just as the enterprise software company of 10 years ago was unbundling Oracle or Google or Excel. >> Do you have the view that what Excel did for accountants, AI is now doing for coders and developers, but it hasn't quite figured out that daily, critical workflow for other job positions, and so it's unclear, for people who aren't developers, why they should be using this for many hours a day? >> I think there are a lot of people who don't have tasks that work very well with this. >> Yeah. >> And then there are a lot of people who need it to be wrapped in a product and a workflow and tooling and UX, and someone to come and say, hey, have you realized you could do it with this? I had this conversation in the summer with Balaji, who's another former a16z person, and he was making this point about validation: these things still get stuff wrong, and people in the Valley often hand-wave this away, but there are questions that have specific answers, where it needs to be the right answer, or one of a limited set of right answers. Can you validate that mechanistically? If not, is it efficient to validate it with people? So, with the marketing use case, it's a lot more
efficient to get a machine to make you 200 pictures and then have a person look at them and pick the 10 that are good than to have people make 10 good images. Even if you're going to make 500 images and pick the 100 that are good, that's a lot more efficient than having a person make 100 images. But on the other hand, if you're doing something like data entry: I wrote something about this when OpenAI launched Deep Research. Their whole marketing case was that it goes off and collects data about the mobile market. I used to be a mobile analyst. The numbers are all wrong. Their use case of "look how useful this is," and the numbers are wrong. In some cases they're wrong because they've literally transcribed the number incorrectly from the source. In other cases it's wrong because they've used a source they shouldn't have used. But if I'd asked an intern to do it, the intern would probably have picked that up. And to the point about verification: if you're going to do data entry, if I'm going to ask a machine to copy 200 numbers out of 200 PDFs, and then I'm going to have to check all 200 of those numbers, I might as well just do it myself. >> Yeah. >> So you've got a whole swirling matrix of: how do you map this against existing problems? But the other side of it is: how do you map this against new things that you couldn't have done before?
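The trade-off being described here, when generate-and-filter beats doing the work by hand and when it doesn't, can be sketched as a back-of-the-envelope calculation. This is a minimal illustration with made-up numbers, not figures from the conversation:

```python
# Back-of-the-envelope sketch of the "generate then verify" economics
# described above. All costs and times are illustrative assumptions.

def human_make_cost(n_items, minutes_each=30):
    """Person-minutes for humans to produce n_items good assets directly."""
    return n_items * minutes_each

def review_cost(n_generated, minutes_each=0.5):
    """Person-minutes for a human to review n_generated machine outputs.
    Machine generation itself is treated as ~free relative to human time."""
    return n_generated * minutes_each

# Marketing case: glance through 200 generated images and keep the 10 good
# ones, versus having people hand-make 10 good images.
assert review_cost(200) < human_make_cost(10)  # 100 vs. 300 person-minutes

# Data-entry case: if checking a copied number costs about as much as
# copying it yourself, verifying 200 machine-copied numbers saves nothing.
assert review_cost(200, minutes_each=1.0) == human_make_cost(200, minutes_each=1.0)
```

The point the sketch makes: generate-and-filter pays off only where human review is much cheaper than human creation; where verifying an output costs about as much as producing it, as in the data-entry example, the machine adds nothing.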
And this comes back to my point about platform shifts, because I see people looking at ChatGPT or looking at generative AI and saying, well, this is useless because it makes mistakes. And I think that's kind of like looking at an Apple II in the late '70s and saying, could you use these to run banks? To which the answer is no, but that's the wrong question. >> Right. >> Like, could you build professional video editing inside Netscape? No, but that's the wrong question, right?
>> And 20 years later, yeah, you can. But meanwhile it does a whole bunch of other stuff. And the same with mobile: can you use mobile to replace your five-screen professional programming rig? No. Therefore it can't replace PCs? Well, guess what: 5 billion people have got a smartphone and 700 or 800 million people have got a consumer PC. So it kind of did, but it did a different thing. And the point of this, which is the disruption framing you mentioned earlier,
is that the new thing is generally not very good, or terrible, at the stuff that was important to the old thing, but it does something else. >> Right. And a lot of the question is: there's a class of old tasks that generative AI is good at. There are many more old tasks that generative AI is maybe not very good at. But then there's a whole bunch of other things that you would never have done before that
generative AI is really, really good at. And then how do you find those, or think of those? And how much of that is the user thinking of it, faced with a general-purpose chatbot? How much of that is the entrepreneur saying, "Hey, I've just realized there's this thing I can do that you couldn't do before, and here you are, I've given you a product with a button that will do it for you"? >> Right. >> And it's why there are software companies. >> Right. And on mobile,
some of the new use cases were getting into strangers' cars (we mentioned Lyft and Uber), or dating people you met via an app, or lending out your spare bedroom, etc. And those were net-new companies that were built around those behaviors. And I think for AI there are still questions of what those net-new behaviors are. We're starting to see some, in terms of people engaging and
talking with chatbots instead of humans, or in addition to them. And then there's a question of: are these done by the model providers that currently exist, or are these done by net-new companies, on both the enterprise and the consumer side? >> Well, this is always the question: how far up the stack does the new thing go? I was talking about this with another former a16z person, who pointed out that in
the mid '90s, people kind of argued that, well, the operating system does all of it and the Windows apps are basically just thin Win32 wrappers. >> Yeah. And Office is basically just a thin Win32 wrapper: all the important stuff is being done by the OS, whether it's the document management and printing and storage and display, which is all stuff that used to be done by apps. On DOS, the apps had to do printing, the apps had
to manage the display. We moved to Windows, and 90% of the stuff that the app used to do is now being done by Windows. >> And so Office is just a thin Win32 wrapper and all the hard stuff is being done by the OS. And it turns out, well, frameworks are useful, but that's maybe not a useful way of thinking about what's going on. And the same thing now: how much does this need a single, dedicated understanding of how that market
works, or what that market is, and what you would do with it? I remember when we were at a16z there was an investment in a company called Everlaw, which is legal discovery in the cloud. >> Yeah. >> And so machine learning happens, and now they can do translation. Are they worried that lawyers are going to say, well, we don't need you guys anymore, we're just going to go and get a translation app and a sentiment analysis app from AWS? No, that's not how
law firms work. Law firms want to buy legal discovery management software. They don't want to go and write their own via API calls. Very, very big law firms might, but the typical law firm isn't going to do that. People buy solutions, they don't buy technologies. And the same thing here: how far up the stack do these models go? How much can you turn things into a widget? How much can you turn
things into an LLM request? And how much does it turn out that you need that dedicated UI? The funny thing is, you can see this around Google, because Google had this whole idea that everything would just be a Google query, and Google would work out what the query was. And guess what: Google Flights is not a Google query. You hit a certain point. And one of the interesting things here, I think, is to think about what a GUI
is doing. The obvious thing a GUI is doing is that it enables Office to have 500 features and you can find them all, or at least you don't have to memorize keyboard commands. You can have effectively infinite features, and you can just keep adding menus and dialog boxes (eventually you run out of screen space for dialog boxes), but you can have hundreds of features without people needing to memorize
keyboard commands. But the other side of it is: you're in that dialog box, or you're in that screen, in that workflow, in Workday or Salesforce or whatever the enterprise software is, or the airline website, or Airbnb, or whatever it is. And there aren't 600 buttons on the screen. There are seven buttons on the screen, because a bunch of people at that company have sat down and thought: what is it that the user should be asked here? What questions should we give
them? What choices should there be at this point in the flow? And that reflects a lot of institutional knowledge, and a lot of learning, and a lot of testing, and a lot of really careful thought about how this should work. And then you give somebody a raw prompt and you just say, "Okay, you just tell the thing how to do the thing." And you've kind of got to screw your eyes up and think from first principles about how all of this works. I
always used to talk about machine learning as giving you infinite interns. So imagine you've got a task and you've got an intern, and the intern doesn't know what venture capital is. How helpful are they going to be? They don't know that companies publish quarterly reports, that we've got a Bloomberg account that lets us look up multiples, and that you should probably use PitchBook for this data rather than Google.
This is my point about Deep Research: no, you should use this source and not that source. Do you want to have to work that out from scratch, or do you want a bunch of people who know a lot about this stuff to have spent five years working out what the choices should be on the screen for you to click on? It's the old user interface saying: the computer should never ask you a question that it should be able to work out by itself. You go to a blank, raw
chatbot screen, and it's asking you literally everything. It's not just asking you one question; it's asking you absolutely everything about what it is that you want and how you're going to work out how to do it. >> And so, you write about how ChatGPT isn't so much a product as a chatbot disguised as a product. I'm curious: when we look back at this platform shift, do you think that there
will be another iPhone-esque or Excel-esque product that defines the platform shift in a way that ChatGPT won't? Or is it that the world has to catch up to how to use ChatGPT, or something like it? >> So both of these can be true, because it took time to realize how you would use Google Maps, and what you could do with Google, and how you could use Instagram,
and all of these products have evolved a huge amount over time. So some of it is that you grow towards realizing what you could do with this: you realize that's just a Google query now, you realize you could just do it like that, you realize, I spent hours doing this and, oh, I could actually just make a pivot table. >> Yeah. The other side of it is that you're still expecting people to work it out themselves from
first principles, and it's kind of useful to have somebody, or a hundred, a thousand, ten thousand really clever people, sitting and trying to work out what those things are and then showing them to you as a product. I think another side to this is that there were always these precursors. There were lots of other things before Instagram. >> Yeah. You know, YouTube didn't start as YouTube; it started as video dating, I think. There were lots of
attempts to do online dating that all kind of worked, until Tinder pulled the whole thing inside out. So there were always lots of things. What's the phrase, local maxima? In fact, that's where we were with the iPhone in particular, because I was working in mobile for the previous decade. It didn't feel like we were waiting for a thing. It felt like it was kind of working: every year the networks got faster and the phones got better, and it
got a little bit better every year, and we had apps and we had app stores and we had 3G and we had cameras, and every year was a bit better. And then the iPhone arrives and it just breaks the chart: you've got this line doing this, and then there's a line that does that. Although, remember, the iPhone also took like two years before it worked, because the price was wrong and the feature set was
wrong and the distribution model didn't quite work. And so, yeah, you can think everything's going well, and then something comes along and you realize: oh no, no, no. Which is the same for Google. Search was a thing before Google; it just wasn't very good. There was lots of social stuff before Facebook, and that was the thing that catalyzed it. So, you know, I just think
this whole thing is so early that it feels like of course there are going to be dozens, hundreds of new things. Otherwise, a16z should just shut down and give the money back to the LPs, because the foundation models will do the whole thing. And I don't think you're going to do that. At least I hope not. >> No, no, no. If we have any regrets from the last few years, it's not going bigger. I think we didn't fully appreciate how much specialization there
would be, whether it's voice or image generation or any subsector you pick: that there would be net-new companies created that would be better than the model providers, and that there would be multiple model providers in every category. In the web 2.0 era we always bet on the category winner, and the category winner would take most of the market. But these
markets are so big, and there's so much expertise and specialization, that there can be winners in every category. It's not just that the model providers take everything; even within every category, including the model providers, there can be multiple winners with increasing specialization, and the markets are just big enough to contain multiple winners. >> I think that's right. And I think the categories themselves aren't
clear, right? >> And many things you think are a category, and it turns out, no, it was actually that whole other thing, and the categories get unbundled and bundled and recombined in different ways. I mean, I remember I was a student in 1995, and I think I had four or five different web browsers on my PC, and web servers on my PC, because Tim Berners-Lee's original web browser had a web editor in it, because he thought this
was kind of like a network drive, a sharing system; he didn't realize it was really a publishing system. So you would have your web pages on your PC, and you'd leave your PC turned on, and that would be how your colleagues would look at your Word documents or your web pages. And so again, we just don't know, and I keep coming back to this point: I feel like most of the questions we're asking at the moment are probably the
wrong questions. >> And picking up on a strand within what you just said, one of the things I'm thinking about a lot is looking at OpenAI. Because I'm fascinated by disconnections, and we've got this interesting disconnect now, which is that if you look at the benchmark scores, you've got these general-purpose benchmarks where the models are basically all the same. And if you're spending
hours a day in them, then you've got this opinion about, oh, I like Claude's tone of voice more than I like GPT, and I like GPT-5.1 more than GPT-4.9 or whatever the hell it's called. If you're using this once a week, you really don't notice this stuff. And the benchmark scores are all roughly the same, but the usage isn't. Claude has basically no consumer usage, even though on the benchmark scores it's the same. And then it's ChatGPT,
and then halfway down the chart it's Meta and Google. And the funny thing is, you read all the AI newsletters and it's: Meta's lost, they're out of the game, they're dead, Mark Zuckerberg is spending a billion dollars a researcher to get back in the game. But from the consumer side, well, it's distribution. And what I'm kind of circling around is: if the model, certainly for a casual consumer user, is a commodity,
and there are no network effects or winner-takes-all effects yet (those may emerge, but we don't have them yet, and things like memory aren't network effects, they're stickiness, and they can be copied), how is it that you compete? Do you just compete on being the recognized brand, and on adding more features and services and capabilities, so people just don't switch away? Which is kind of what happened with Chrome, for example. There's no network effect for Chrome,
and it's not actually much better. Maybe it's a bit better than Safari, but you use Chrome because you use Chrome. Or is it that you get left behind on distribution, or on network effects that emerge somewhere else, and meanwhile you don't have your own infrastructure? So I suppose what I'm getting at is: you've got these 800 or 900 million weekly active users, but that feels very fragile, because all you've really got is
the power of the default and the brand. You don't have a network effect. You don't really have feature lock-in. You don't have a broader ecosystem. You also don't have your own infrastructure, so you don't control your cost base, and you don't have a cost advantage: you get a bill every month from Satya. So you've got to scramble as fast as you can in both directions. On one side, build product and build stuff on top of the model, which is our earlier
conversation: is it just the model? Yeah. >> Now, you've got to build stuff on top of the model in every direction. It's a browser. It's a social video app. It's an app platform. It's this, it's that. It's like the meme of the guy with the map with all the strings on it. It's all of these things, and we're going to build all of them yesterday. And then, in parallel, it's infrastructure: we've got to do a
deal with Nvidia, with Broadcom, with AMD, with Oracle, and with petrodollars. Because you're scrambling to get from this amazing technical breakthrough and these 800 or 900 million wows to something that has really sticky, defensible, sustainable business value and product value. >> Yeah. And so, as you're evaluating the competitive landscape among the hyperscalers, what are the questions that you think
are going to be most important in determining who's going to gain durable competitive advantages, or how this competition is going to play out? >> Well, this comes back to your point about sustaining advantage, and we talked about Google. If we think about the shift to mobile in particular: for Meta this turned out to be transformative; it made the products way more useful. >> Yeah. >> For Google, it turned out mobile
search is just search, >> and Maps changed, probably, and YouTube changed a bit, but basically for Google, search is search, and mobile just means more people doing more search more of the time. Yeah. >> And the default view now would seem to be: well, Gemini is as good as anybody else's model. I haven't looked at the benchmarks for GPT-5.1, which is out today. Is it better than Gemini? Probably. Will it still be better next month? No.
So that's a given. You've got a frontier model. Fine. What does that cost? Pick a number: $250 billion a year, $100 billion a year. This is our earlier conversation about capex. Okay. So Google can pay that, because they've got the money; they've got the cash from everything else. And so you do that, and with your existing products you optimize search, you optimize your ad business, you build new experiences. Maybe you
invent the iPhone of AI. Maybe there is no iPhone of AI. Maybe someone else does it, and you do an Android and just copy it. So fine, it's the new mobile. We'll just carry on. Search is search, AI is AI, we'll do the new thing, we'll make it a feature, we'll just carry on doing it. For Meta, it feels like there are bigger questions about what this means for search, or what it means for content and social and experience and recommendation, which makes it all the
more imperative that they have their own models, just as it is for Google. For Amazon, okay: on one side it's commodity infra, and we'll sell it as commodity infra. And on the other side, maybe step back: if you're not a hyperscaler, if you're a web publisher, a marketer, a brand, an advertiser, a media company, you could make a list of questions, but you don't even know what the questions are right now. >> What is this? What happens if I ask a
chatbot a thing instead of asking Google? From Google's point of view, even if it's Google's chatbot, it's fine. But as a marketer, what does that mean? What happens if I ask for a recipe and the LLM just gives me the answer? What does that mean if my business is having recipes? >> Yeah. >> Do you get a kind of split between, and this is also an Amazon question, how a purchasing decision happens? How does the decision to buy a thing that I
didn't know existed before happen? What happens if I wave my phone at my living room and say, "What should I buy?" Where does that take me, in ways that it wouldn't have taken me in the past? So there are a lot of questions further downstream, and that goes upstream to Meta and, to some extent, to Google. It's a much bigger question in the long term for Amazon: do LLMs mean that Amazon can finally do really good at-scale recommendation and discovery and suggestion, in ways that it couldn't
really do in the past because of the pure commodity retailing model that it has? Apple is sort of off on one side. Interestingly, they produced this incredibly compelling vision of what Siri should be two years ago; it just turned out that they couldn't make it. Interestingly, nobody else could have made it either. You go back and watch the Siri demo that they gave and you think: okay, so we've got multimodal, instantaneous, on-device, tool-using,
agentic, multi-platform e-commerce in real time with no prompt injection problems and zero error rates? Well, that sounds good. Has anyone got that working? No. Google and OpenAI don't have that working. I don't think Google or OpenAI could deliver the Siri demo that Apple gave two years ago. They could probably do the demo, but they couldn't consistently, reliably make it work. That product
isn't in Android today. And Apple, to me, has the most intellectually interesting question. I saw Craig Federighi make this point, which is: we don't have our own chatbot, fine; we also don't have YouTube or Uber. Explain why that is different. Which is a harder question to answer than it sounds. And of course the answer is: if this actually fundamentally changes the nature of computing, then it's a problem. If it's just a service
that you use, like Google, then it's not a problem. Which is kind of the point about where Siri goes. But the interesting counterexample here would be to think about what happened to Microsoft in the 2000s, which is that the entire dev environment gets away from them, and no one builds Windows apps after 2001 or so. But you need to use the internet, and to use the internet you need a PC, and what PC are you going to buy? Well, Apple is
not really a player at that time, just getting back into the game, and Linux is obviously not an option for any normal person. So you buy a Windows PC. So basically Microsoft loses the platform war and sells an order of magnitude more Windows PCs as a result of this thing that Microsoft lost. And then it takes until mobile for them to lose the device as well as the development environment. So
here's the question: if all the new stuff is built on AI, and I'm accessing it in an app that I download from the App Store, to what extent is this a problem for Apple? You would need a much more fundamental shift in what was happening for it to be a problem for Apple. And even if you take, not the full version where the rapture arrives and we all just go and sleep in pods like the guys in Up. Not Up. What
is it, the one with the robot that's collecting the trash? Which one is that? >> WALL-E. >> WALL-E, yeah. The guys in the pods in that movie. Maybe we'll be like that, in which case, fine. But there's a sort of mid-case, which is that the whole nature of software changes, and there are no apps anymore, and you just go and ask the LLM a thing. Fine. What is the device on which you ask the LLM a thing? Well, it's probably
going to have a nice big color screen, and it's probably going to have a one-day battery life. It probably needs a microphone. Probably a good camera.
Kind of sounds like an iPhone. >> Yeah. Am I going to buy the one that's a tenth of the price and just use the LLM on it? No, because I'll still want the good camera and the good screen and the good battery life. So there's a bunch of interesting strategic questions when you start poking away. What does this mean for Amazon? Those are completely different questions from what it means for Google, or what it means for Apple. What does it
mean for Facebook, or what does it mean for Salesforce? What does it mean for Uber? And then, right back to what we were saying at the beginning of this conversation: what does this mean for Uber? Well, their operations get X% more efficient, and now the fraud detection works, and, okay, maybe there are autonomous cars, but that's a whole other conversation; presume no autonomous cars. Otherwise, as Uber, what does this change? Well,
not a huge amount. >> I want to zoom out a little bit. So, you've been doing these presentations for a while now. You bumped them up to twice a year because so much is changing. And one of the things you do in each presentation, and you're famous for this, is asking really great questions and chronicling what the important questions are to be asking. I'm curious, as you reflect,
maybe post-ChatGPT in 2022, or GPT-3 rather, on the questions you were asking then versus now: to what extent do we have some direction on some of those questions, and to what extent are they the same questions, or new and different questions? If I woke up from a coma after reading your original presentation, say the one that came out after the GPT-3 launch, and then saw this one now, what were
the most surprising things, or the things that we learned that updated those questions? >> So I think we have a lot of new questions this year. You could make a list of maybe half a dozen questions in spring of '23: open source, China, Nvidia, does scaling continue, what happens to images, how long does OpenAI's lead remain. And those questions didn't really change in '23 and '24, and most of them are kind of still there. The Nvidia
question hasn't really changed. The answer on how many models there will be: okay, anybody who can spend a couple of billion dollars can have a frontier model. That was pretty obvious in early '23; it took a while for everyone to understand it. And big models and small models: will we have small models running on devices? No, because the capabilities keep moving too fast
for the small models to shrink onto the device. But those questions kind of didn't change for two, two and a half years. I think we now have a bunch more product strategy questions, as you see real consumer adoption, and OpenAI and Google building stuff in different directions, Amazon going in different directions, Apple trying and obviously failing and then trying again. There's some sense that there is something more going on in the industry
than just, well, let's build another model and spend more money. >> Yeah. >> There are more questions and more decisions. Now, there are also more questions outside of tech, certainly on the retail media side, of how you start thinking about what you would do with this. And again, the classic framing in my deck is: step one, you make it a feature, you absorb it, and you do the obvious stuff. Step two, you do new stuff. Step three, maybe someone will come
and pull the whole industry inside out and completely redefine the question. And so you could do an "imagine if" here. Say you're a manager at a Walmart in the Bay Area or DC or wherever. Step one is: find me that metric. Step two is: build me a dashboard. Step three is: it's Black Friday and I'm managing a Walmart outside of DC, what should I be worried about? And that might be the wrong example, but
it's like: step one for Amazon is, you bought bubble wrap, so here's some packing tape. But what Amazon should actually be doing is saying, hm, this person is moving home, so we'll show them a home insurance ad. Which is something that Amazon's correlation systems wouldn't get, because they wouldn't have that in their purchasing data. And we're still very much on step one of that, but thinking much
more: what would step two, step three be? What would new revenue from this be, other than just simple, dumb automation? What would the new things we build with this be? Where might this actually redefine or change what the market looks like? And that's obviously a big question for anyone in a content business. >> Yeah. >> What does it mean if I can just go and ask an LLM this question?
What kinds of content were predicated on Google routing that question to you? And what kinds of content aren't really that question? Do I want a Bolognese recipe, or do I want to hear Stanley Tucci talking about cooking in Italy? Do I just want that SKU, or do I want to work out which product I should buy? Amazon is great at getting you the SKU, terrible at telling you which SKU you want. Do I just want the
slide deck, or do I want to spend a week talking to a bunch of partners from Bain about how I could think about doing this? Do I just want money, or do I want to work with a16z's operating groups? >> Like, what is it that I'm doing here? And I think the LLM thing is starting to crystallize that question in lots of different ways. >> Yeah, like what am I actually trying to do here? Do I just want a thing that a computer can now answer for me, or do I
want something else that isn't that? Because the LLMs can do a bunch of stuff that computers couldn't do before, >> right? >> Is the thing that the computer couldn't do before my business? >> Yeah. >> Or am I actually doing something else? >> We're about to figure out, in a much more granular way, what the true job to be done is for many, many of these. >> Yeah. And going back to the internet, there was the sort of observation about newspapers, that
newspapers looked at the internet and talked about expertise and curation and journalism and everything else, and didn't really say, "Well, we're a light manufacturing company and a local distribution and trucking company." >> Yeah. >> And that was the bit that was the problem. Until the internet arrived, that wasn't a conversation you thought about, and then the internet suddenly makes that clear and suddenly creates an unbundling that didn't exist
before. And so there will be those kinds of moments: you didn't realize you were that before, until someone comes along with an LLM and says, "I can use this to do this thing," something you didn't really realize was the basis of your defensibility or the basis of your profitability. It's like the joke about US health insurance, that the basis of US health insurance profitability is making it really, really
boring and difficult and time-consuming. That's where the profits come from. Maybe it isn't; I don't know that industry. But for the sake of argument, say that's your defensibility. Well, an LLM removes boring, time-consuming, mind-numbing tasks. >> Yeah. >> So, what industries are protected by having that, and didn't realize it? You could have asked these questions about the internet in the mid '90s or about mobile a decade later. And generally,
half of the questions you'd have asked would have been the wrong questions in hindsight. I remember, as a baby analyst in 2000, everyone kept saying, "What's the killer use case for 3G? What's a good use case for 3G?" And it turned out that having the internet in your pocket everywhere was the use case for 3G. >> But that wasn't the question people were asking. And I'm sure that will be the thing now: there's so much that
will happen and get built where you go and realize, "Oh, that's how you would do this. You can turn it into that." >> Yeah. >> And I'm sure you've had this experience seeing entrepreneurs: every now and then they come in and pitch the thing, and you're like, "Oh, okay. You can turn it into that. I didn't realize it was that." >> Yeah. No, 100%. My last question to get you out of here is: if we're talking two or three years from now, or
you're doing a presentation and you say, "Oh, this is actually bigger than the internet," or "maybe this is like computing," what would need to be true? What would need to happen? What would evolve our thinking? >> I kind of come back to my point about the Jews and the Christians: the Messiah came, and nothing happened. There are maybe two very brief ways to think about
this. One of them is that I think we forget how enormous the iPhone was and how enormous the internet was. You can still find people in tech who claim that smartphones aren't a big deal, and this was the basis of people complaining about me: "This idiot thinks generative AI is as big as those silly phone things. Come on." I think another answer would be: I don't want to get into the argument about the rate of growth in capability, and benchmarks, and all
that, you know. You see lots of five-hour-long podcasts of people talking about this stuff, but the stuff we have now is not a replacement for an actual person outside of some very narrow and very tightly constrained guardrails, which is why there's Demis's point that it's absurd to say we have PhD-level capabilities now.
We would have to be seeing something that would really shift our perception of the capability of this stuff. >> Yeah. >> So that it's actually a person, as opposed to something that can do these person-like things really well sometimes but not other times. And it's a very tough conceptual thing to think about because, you know, I'm deliberately, consciously not giving you a falsifiable answer. But I'm not sure what a falsifiable answer would be to
that. When would you know whether this was AGI? You know, it's the Larry Tesler line: AI is whatever doesn't work yet. As soon as people say it works, people say, "Well, that's not AI. That's just software." And it becomes a slightly drunk philosophy-grad-student kind of conversation as much as it is a technology conversation. Like, have you ever considered, Eric, that >> maybe we're not either.
>> That's a thought. All I can say, to give a tangible answer to this question, is that what we have right now isn't that. Will it grow into that? We don't know. You may believe it will; I can't tell you that you're wrong. We'll just have to find out. >> I think that's a good place to wrap. The presentation is AI Eats the World. We'll link to it. It's fantastic. Benedict, thanks so much for coming on the podcast to discuss it. >> Sure. Thanks a lot.