
Where does consumer AI stand at the end of 2025?

By a16z

Summary

## Key takeaways

- **ChatGPT Dominates Consumer AI**: ChatGPT is currently in the lead by far at 800 to 900 million weekly active users. Gemini is at an estimated 35% of its scale on web and about 40% on mobile, and everyone else significantly trails this. [01:23], [01:31]
- **Gemini Accelerating Past ChatGPT**: Gemini is now growing desktop users 155% year-over-year, which is actually accelerating even as it reaches more scale, while ChatGPT is only growing 23% year-over-year. [01:51], [02:00]
- **Image/Video Models Go Viral**: The most viral models this year, at least among consumers, were in image and video: for OpenAI, GPT-4o image generation (the Ghibli moment) and Sora 2; for Google, Veo 3, Veo 3.1, and Nano Banana, which went insanely viral. [02:49], [03:01]
- **Product Nuances Drive Engagement**: When you open Gemini, it has a popup that says "We got Nano Banana. Would you like to do something with it?" and a little pane where you have to type something. ChatGPT has a very TikTok-like feed of trending themes that you might want to generate, and you just click. [14:20], [14:42]
- **Social Features Struggle in AI**: OpenAI's social features like group chats stop at an n-count of two to three people planning something in a "help me" way. Sora 2 has been successful as a creator tool but not as a social consumption app; it is more like CapCut. [18:00], [19:02]
- **Claude Owns Technical Users**: Anthropic is almost specializing within consumer, owning verticals like the hyper-technical user. Claude launched Artifacts and Skills for workflows and is best at file creation like slide decks, but needs to simplify for the mainstream. [02:03], [21:31]

Topics Covered

  • Gemini Accelerates Past ChatGPT
  • Image Models Crack Reasoning
  • AI Social Fails Status Game
  • Labs Excel Incremental, Not Opinionated

Full Transcript

For most of the year, less than 10% of ChatGPT users even visited another one of the big LLM providers.

>> When you open Gemini, it has a popup that says, "We got Nano Banana. Would you like to do something with it?" and a little pane where you have to type something.

>> Yeah.

>> I don't know what to do.

>> These are product nuances that I think make people actually take the first step.

>> The models have gotten to the level of quality that you can build a real scalable app on top of them. And so the hope is 2026 will be a huge year for consumer builders.

Today we're talking about who won consumer AI in 2025. This was arguably the year that the big model providers, OpenAI and Google most of all, made a major push of their own into consumer, both in terms of the new models they released and in terms of new products, features, and interfaces that target the mainstream user. You might wonder why it matters who is in the lead here. There are some early signs that the general LLM assistant space might be trending towards winner take all, or at least winner take most.

So only 9% of consumers are paying for more than one out of the group of ChatGPT, Gemini, Claude, and Cursor. And for most of the year, less than 10% of ChatGPT users even visited another one of the big LLM providers like Gemini. If we had to call it now, ChatGPT is currently in the lead by far at 800 to 900 million weekly active users. Gemini is at an estimated 35% of their scale on web and about 40% on mobile. And everyone else significantly trails this. So Claude, Grok, and Perplexity are all at about 8 to 10% of the usage.

But especially in the last 3 to 6 months, things are changing very quickly with the launch of new viral models like Nano Banana. Gemini is now growing desktop users 155% year-over-year, which is actually accelerating even as they reach more scale, which is pretty crazy to see. And ChatGPT is only growing 23% year-over-year. And we're starting to see players like Anthropic almost specialize within consumer, owning different verticals like the hyper-technical user. So today we've brought together the a16z consumer team to recap what we saw this year from the big model companies in consumer, and also to predict what might be ahead of us in 2026.

>> Cool. Well, thank you, Olivia. It's been a super fun year. If we wind the timeline back to last January, maybe we should start with what we saw: launches, products, what worked, what didn't. So Justine, tell us what you saw this year. OpenAI, Google, what are you paying attention to? What have you changed your mind on?

>> Yeah, those two in particular had a ton of consumer launches, like Olivia mentioned. From a model perspective, I would argue their most viral models this year, at least among consumers, were in image and video. So for OpenAI, it was GPT-4o image generation, the Ghibli moment, which is crazy that that was this year. It feels like it was years ago. And then obviously Sora 2. And then for Google, it's Veo 3 and Veo 3.1, and then Nano Banana and Nano Banana Pro in image models, which went insanely viral, probably comparable to, if not beyond, the Ghibli moment for OpenAI.

I think in terms of the product layer, what we saw was that OpenAI tended to keep more things in the ChatGPT interface. So Pulse, group chats, shopping, research, tasks: all of these features launched inside ChatGPT as the core. The exception there is obviously Sora as a standalone video app. Whereas Google tended to launch more things as standalone products. So they did ship a lot through Google AI Studio and Google Labs and Gemini and the plethora of Google surfaces there are to launch a product, but they would also ship things as standalone websites that you could go to and visit, which basically allowed for a more custom interface for each type of product, not just the kind of chat-entry, chat-exit, or image-video-exit.

>> Well, so Justine, I have a question for you on that. It felt like 18 months ago we were talking about Midjourney, and most of the multimodal models were defined by aesthetics and realism. Is that still true? What changed this year?

>> Yeah, I think there are definitely different styles still, and I think Midjourney, when you talk to people really deep in image and video, still kind of stands apart for an aesthetic sensibility that a lot of the models don't have if you don't know how to prompt for it. But I would say this year in particular, we made a lot more strides on realism and also on reasoning within both image and video. Like all of the little details that make an image or a video actually seem real. For example, if you have a person walking and talking, the people and the cars in the background, if they're on a street, should be moving in the correct direction; they shouldn't be morphing and looking strange. And in image, we were able to have multiple input images and text, and sort of reason across all of those uploads to create a cohesive design or something like that, which was not something we saw happening last year, for sure.

>> Yeah. I remember when we were excited about having a letter show up correctly in images and now we have insane infographics.

>> Yes.

>> We can just pull up an amazing YouTube video and say, give me an image that explains it. That's incredibly different.

>> Nano Banana Pro can even generate market maps. I generated a market map. It's incredible.

>> And it will either, like, it either has or will go do the web research within the image model, which is crazy, to get the correct list of companies and then pull their photos, which is insane. I know there's one benchmark left that the reasoning image models have not cracked. I tested GPT Image 1.5 yesterday. They sometimes struggle with both reasoning and multi-step reasoning. So what I've been testing is: you upload a picture of a Monopoly board and you say, remove the names of all the properties and replace them with names of AI labs and startups. And GPT Image 1.5 is actually the closest, but it's very hard for them to do all of those steps: remove it, come up with the new names, put all of the new names in the correct places, make sure there aren't overlaps, or one thing you mentioned three times and another big player you never mention. So there's still some room to go on the image models.

It's interesting that, especially with the image model from ChatGPT, you can actually see persistence: it carries a character over into multiple image generations, the same style.

>> And I thought that was, oh, this is actually very interesting for storyboarding.

>> Totally. It makes you want to generate more.

>> You know, for me it felt like the most underhyped aspect of Nano Banana was the integration with search. Because there's realism, which is physics and sort of other things that feel, you know, uncanny-valley related. There's reasoning, which is applying modifications that are adherent to what the user asked for. But then there's also sort of accuracy. And for me, a good example of this is product photography. If you say, "Hey, generate a photo of this album cover or a historically accurate photo of this moment in time," you have to actually have the search integration. And that was sort of non-intuitive, but it's actually very useful.

>> Totally. Yeah. It's kind of like the Veo 3 moment, when I don't think it was intuitive to people that video would be cracked necessarily by bringing audio together with video in the same place.

And that ended up being the thing that made AI video go viral.

>> Like since Veo 3, and now maybe Sora dominates, but since Veo 3, my social feeds have been full of really realistic-looking AI video; about one-fifth of my feed is AI generated.

>> Amazing. Yeah.

>> What do you guys think? There were so many launches this year and many of them went well, like Veo and Nano Banana. What do you think is underhyped, or which products do you think didn't get enough attention? Brian?

>> It's a good question.

>> I think, for underhyped, Pulse and products of that world are probably still underhyped. And you know, we're talking about OpenAI and Google, which to me fall under the productivity category. So if you go to the App Store today, the top five out of the top 10 productivity apps are all Google. It's insane. And ChatGPT is number one. So we're talking about a productivity category, where it's like: helps you do things. And I feel like a lot of people are trying this from a different angle, like how do I actually ingest your data or your schedule or your email to make it more helpful and give more proactive notifications to you. I think a lot of people are working on it. Given the frequency of people using ChatGPT, which I think is what, 25 times a week?

>> Pretty good. Pretty good. Three to four times a day.

>> It feels like it's in a really good position to actually give you proactive nudges and summaries and help your life in general. So I feel like the everything app was always this myth in the Western world. I think OpenAI is trying to move in that direction, where it's ingesting enough, people are going there enough, to start giving really useful proactive nudges. And I think that's a space that I'm excited about.

>> It's interesting.

>> But are you a DAU? Because...

>> I am not a DAU.

>> A Pulse DAU? No, not of Pulse.

>> Similarly, I tried pulse for a while and have kind of largely turned off of it.

But I would agree with you that I feel like Pulse and a couple of other examples that OpenAI launched this year are kind of new primitives or ideas that feel underhyped, maybe because the execution is a little off.

>> I think it's the execution.

>> The usage is off. Another example that I would give, which is similarly about personal context, would be their connectors. So now you can, and you can do this on Claude as well, connect your calendar, your email, your documents. And so hypothetically, you could say to ChatGPT, you know, read all of my memos over the past 6 months and summarize what's most interesting and least interesting. I think when that works, it's really exciting. I have found it to be a little bit unreliable so far, but I think as the models get better, they have a real chance to kind of own the prosumer workspace if they get that right.

>> Prosumer is the perfect category, because we talk about it sometimes, but 99% of people don't run their lives on a calendar. Yeah, we do. But so that's when I'm thinking about the actual average frequency of using ChatGPT, and if it's 24 times a week, that's a pretty good place to start.

>> Yeah, Olivia, I feel like you're the ultimate power user. What are you still using? What's your stack?

>> It's a great question. From all of the larger model companies, actually, I would have to say the thing that I'm still using the most, and was maybe the most impressed by this year, was the Perplexity Comet browser. And I was not using Perplexity as my core general LLM assistant; I use ChatGPT and Claude much more. But I think they really executed on it in a first-class way, in terms of both the agentic model within the browser, but also, perhaps more importantly, all the workflows that you can set up that allow you to basically run the same task over and over, either at a preset time or when you trigger it on a certain web page. So that to me was a really exciting watch. And if you look at the data, the spike at launch and the sustained traffic for Comet were actually much higher than for ChatGPT's own browser launch, Atlas, which is kind of crazy given how much more distribution ChatGPT has than Perplexity. But I think they also launched an email assistant this year, Perplexity did, and they made a couple of acquisitions of really strong agentic startups. And so what I would love to see from them next year is more of these dedicated prosumer interfaces, because I feel like that would be an awesome direction for them to kind of double down in.

>> They do feel like the startup that has the biggest breadth of ambition, you know, alongside the labs and sort of big tech. It's very impressive, just the number of things they've shipped this year.

>> Yes, definitely.

>> One thing I wanted to ask you, Justine: Gemini feels like it's having a real moment because of all the image and video models. Do you think it can overtake ChatGPT? Is there truly that much demand for these types of models?

>> I think yeah. So what I've seen basically is that there is always nearly infinite demand for the best-in-class image or video model,

because then you have a mix of tons of different people seeing it and wanting to use it. You have like if you're using it professionally, if you're marketing or in entertainment or storyboarding or whatever, um you always want to be using

what's at the forefront of the field.

And so you're totally fine to go somewhere other than ChatGPT and Sora to get access to Veo. Even if you're an everyday consumer, so many new viral trends are created around new

capabilities of the best-in-class image and video models. And so that ends up driving users into different products that they may have never tried before.

Like you might be downloading the Gemini app, or accidentally ending up on Google AI Studio, which I know they're trying to make more for developers, to use Nano Banana Pro, which a lot of users I

think experienced in the past couple of months.

>> Yeah. The interesting thing about Gemini to me is that hypothetically they benefit from the massive Google distribution advantage. If you look at Android, Gemini is at like 50% of ChatGPT's scale on mobile, whereas on iOS it's like 17%. So clearly something is working there. They launched a little Gemini widget within Chrome recently that encourages you to use it. They're launching it within Google Docs and Gmail and other things. But I think that the average person is still just using one AI product, and ChatGPT is the Kleenex of AI. It is the brand that has become...

>> Exactly.

>> And so I think that Gemini still has a pretty big hurdle to overcome just in terms of that.

>> But if they keep doing what they're doing on these amazing viral consumer creative tool launches and model launches, they could get there next year. I was thinking about this; it's really interesting when you look at

>> Yeah.

>> But yet nowhere to some extent right you don't like you know when you look at the actual usage people still think of the Kleenex um and they go to >> chatbt but the interesting thing also is

on the product sensibility. So this

morning I had like two panes open >> open AI's image model and um and Google's uh Gemini and basically use a image functionality. When you open

image functionality. When you open Gemini >> Mhm.

>> it's a blank screen.

>> It has a popup says we got nano banana.

Would you like to do something with it >> and a little pane where you have to type something?

>> Yeah.

>> I don't I don't know what to do.

>> Yeah. GPT. You go in and it has a very Tik Tok like style of like here's a trending themes that you might want to generate and you click on I want a sketch pen or whatever and then just like use one other picture and it

creates something amazing and then it says would you like a holiday card?

Would you like a blah blah blah blah.

>> These are product nuances that I think make people actually take the first step to generate it. And then once you have it, you have character consistency, so you keep going, right? So that's interesting, in that I think the OpenAI and ChatGPT team has proven that there's deeper product sensibility.

>> Yeah.

>> But then, and this is a funny thing, maybe a little non-kosher thing to say, but you know, I worked at Snap. So when you look at Meta versus Snap, famously, Evan Spiegel was the chief product officer of Meta.

>> Yeah. Yeah.

>> I wonder if there's a world where the ChatGPT team is the one that innovates on the product front again and again, and Google, with distribution, looks at it and says, that's cool, let's just integrate it and keep going.

>> And actually play that game.

>> The interesting thing there is that the images pane just launched yesterday, as we're filming this, in ChatGPT, brand new. They had image models for years, and it took them that long to come up with a separate, relatively basic interface for generating images. I would almost argue the application-layer companies, the Kreas, the Hedras, the Higgsfields of the world, popularized that template format and did it first and did it better. And they are ChatGPT's...

>> Supply chain of product ideas.

>> Exactly.

Always. Always. Well, maybe going in a slightly different direction, VK, I'm very curious for your take on OpenAI's social features, because it does feel like that's something you really have to get product execution right on, but also network design. You know, there are some efforts around Sora, too. We should talk about that. There are also group chats within ChatGPT.

>> You're our sort of social guy, or have been historically. Bullish, bearish? Where's your head at?

>> Uh bearish for now.

>> Okay.

>> The reason to me is twofold. Historically, I look at products based on what I call inception theory. You go three to four layers down to figure out what the one-liner is, which is like, I want my dad to love me. And so, you know, when they think of a product for you or for the world...

>> That's for me as well, by the way.

>> Yes. And so I looked at some of these products. ChatGPT, ultimately, when you peel the onion five times, I think is essentially "help me be better": help me get that information, help me be more productive, help me be more efficient. And then when I think about social features, Meta, Instagram, what have you, or even TikTok, the layer of emotion it is trying to address, to me, for TikTok is "entertain me": I want my clown, entertain me.

>> Yeah. And then the other layer is: I'm lonely, I want to be seen, I want to connect with people.

>> And to me, these are two pretty different parallels in product direction.

>> And OpenAI's product is incredible. It's magic. It's amazing. But it's ultimately a "help me" category.

>> Which essentially is why it's number one in the productivity category.

>> Yeah.

>> Now we're trying to take this and shove it into people's lives and say, "Guys, connect better," and actually feel like you're being seen. And even the group chat function, which I love, it'll be so good to plan a trip and actually have that common pane.

>> But I think it still stops at probably an n-count of two to three people planning something in a "help me" way. Yeah.

>> Versus, oh, I feel like I understand a niche so much better because I've sort of done that. So largely, over time, I think that's the reason for that division.

>> But that is not to say you can't build a separate product that completely addresses that.

>> I think Sora, so we talked about group chat, Sora 2 was the other big social push this year from all the consumer AI companies, which was basically like a TikTok feed but all AI-generated video, and you can make cameos of your friends.

>> The cameos were a very good bet. It was a strong bet.

>> Yeah. But I think what we've seen in the retention data, and how we're seeing it used, is that it was massively successful as a creator tool. Like now my feed is probably two-thirds AI slop, if not more, and over 50% of it is now Sora, whereas before it was all Veo and some Kling. But it has not been as successful as a social app.

>> For consumption. Yeah, a small number of creators are creating a ton of content and then bringing it out to TikTok, Instagram, X, Reddit, where it's going massively viral, but it doesn't seem like there's as much consumption happening in the app, as much remixing, as much commenting, especially as there was initially.

>> You know, in a funny way, the way I think about it is that Sora's competition or analogy isn't actually TikTok.

>> Mhm.

>> It's actually CapCut.

>> In a funny way, it's almost like a creative tool.

>> Interesting.

>> Yes.

>> Olivia, what's your take?

>> I think, well, I was going to say it goes back to your earlier point, which is that the kind of emotion that drives social apps is both these positive and negative feelings of, oh, I'm publishing this thing of myself that's kind of sensitive, or that I want people to think is this or that. And so that's kind of what drives participation on the app.

>> The status game.

>> Exactly. The status game. And when it's AI generated content and people know it's not real, like a real representation of you as a human being, the status game is lost a little bit.

>> Absolutely lost.

>> Yeah. I think the status game then becomes, can you prompt something very cool, but that's a different type of product. And that's why I think it goes viral on Twitter and all these other existing platforms.

>> I mean, my sort of counterpoint, or bull case, for Sora 2 is I actually think the status game was about humor more than anything else. And humor is the intersection of knowing how to prompt and sort of being culturally aware.

>> So I think that if they iterated on that, that's like a direction that nobody has captured before.

>> Yeah. Yes. But if you can export those videos, isn't it true that TikTok with Sora videos on it is strictly better than Sora? We talked about it so much, where the ultimate social product is where consumption and creation both live together, and the output of it is not native to other platforms like TikTok, like YouTube Shorts.

>> So what do folks think of the challengers? You know, I mean Meta, it's crazy to talk about Meta as a challenger, I guess in this context they are, but I think Claude, Perplexity, and Grok are the more obvious names for challengers. Olivia, what's your take?

>> I love Claude.

I talk to Claude all the time. Claude has somewhat replaced ChatGPT for me as my general LLM. I think Claude is opinionated in an interesting way. I also love Claude because I'm willing to invest time into building out AI workflows. I think Claude actually launched a lot of really powerful things this year around Artifacts and Skills, where you can essentially set up tasks or workflows to run over time. I do think the reason it hasn't hit the mainstream yet is that even the way they built those things is geared towards a technical user or an engineer. I think they tried to make Skills as easy as they could to create, and it still was not anywhere near easy enough for the mainstream consumer. Another example: they were actually the first of the big players to launch file creation, slide deck creation and editing, and they branded it as something like "file generation and analysis," and it was a toggle feature within a settings bar of a settings bar or something. So very few people used it, and yet to me it's still the best product across all of them at doing that kind of complex work. So, I love Claude, but I think if they want to be a true mainstream consumer product, they need to dumb it down even more in terms of accessibility.

>> There was that

survey you found recently of US teens.

>> Yeah, I think it was three times more US teens have ever used Character AI than have used Claude.

>> Yeah. So I think that shows it's a pretty broad thing: Claude is beloved amongst tech people, but outside of tech people I think they are maybe struggling to pick up relevance.

>> It is interesting, though, if you look at the sort of aesthetics, the product design, the craft. Three things that Anthropic did were MCP, Skills, and a command-line interface, Claude Code. Those are three surprising bets, especially Claude Code. I would have said, a command-line interface? Really? Is this the way that people want to...

>> I thought you were going to talk about taking over email and the thinking cap.

>> Yeah, that too, on the consumer side. So where's the thinking cap? But it's sort of very high-minded design, or maybe that's apologetic on their behalf, but I think it is that: it's opinionated, and it's great, you know.

>> I do need to hear Justine's take on both Meta and Grok, as I feel like they both had fascinating years in different ways.

>> Yes.

>> So, Meta hired all those researchers. I think their strongest models are actually not consumer-facing models. It's their SAM 3 series, the "segment anything" models for video, for image, and for audio. Basically, for video, for example, you can upload a video and describe in natural language, "find the kid in the red t-shirt," and it will find and track that person across the entire video, even if they're coming in and out of the frame. It will let you apply effects like blurring them out or removing them or whatever. And you can imagine a similar thing with audio, with different stems, and then with image, with different objects in an image. I think we're going to see next year, hopefully, some incredible consumer products built on top of those models, but today they're more of a playground for developers than they are a consumer product.

>> Which is surprising given the DNA of the company.

>> Yeah. So, the one good consumer feature I think they've launched this year with AI is the Instagram AI translations, where, when you're uploading a Reel now, you can opt in to enable translations and it will clone your voice, translate it into five different languages, apply the translation with your voice, and then redub it with lip sync.

>> Wow.

>> And so it basically makes it seem like you're a native speaker in whatever language. So I would love to see more of that stuff come to the Meta products. Grok, I think, had a crazy year, with the companions, with all of the LLM progress and the coding progress. I think their image and video progress is

probably the steepest slope I've seen of any of the companies. It was probably like 6 months ago that they didn't even have image and video models, and they're shipping so fast to launch new features. It was initially just image-to-video. They

added text to video. They added audio.

Then they added lip sync with speech.

Then they added 15-second videos. They are just not slowing down the speed of progress. And Elon has made a bunch of statements about wanting more interactive, video-game-type content out of Grok, and wanting movies out of Grok by the end of next year. So let's hope it continues to go at that pace.

>> Do you feel like it's a pincer movement, where on one hand there's a very infrastructural model layer of "let's top the LMArena charts," and then the other one is, like, let's go Ani?

>> I think it's a little bit of a bifurcated move, right? Like the entertainment and the...

>> Absolutely. But entertainment in a way that, like, we're talking about, you know, Anthropic and ChatGPT and the general population, but you just said Character AI is way more popular.

>> Yes.

>> So then like how do we think about that?

And I think, you know, it's a very interesting strategy in my mind. And Grok, in the image and video app, since pretty early on, has had templates of popular things, like you're standing somewhere and suddenly a rope drops from the ceiling and you grab onto it and it swings you out of the scene. Some really good ones that go viral regularly on TikTok and other places.

>> Yeah, really interesting. Well, maybe switching gears from '25 to '26, what are some of your predictions for next year? What do you think we'll see? Hardware, models, commerce we haven't spoken about yet. What do we think will play out?

>> I know we're talking about consumer, but one of the things that's been maybe underrated for me about ChatGPT, that we might see more of next year, is that they've really made a push into the enterprise, both with the traditional enterprise licenses and then working with specific companies to even train models for them. And I think when we think about the fact that most consumers only use one general LLM product: ChatGPT enterprise usage, they published a big study, is up something like eight or 9x year over year. And so if we're entering a world now where people have to use ChatGPT for their company or as part of their work, that could really translate into consumer usage. Or maybe they become the workspace, with the connectors and some of the other things that they're investing in, and someone else owns the consumer use cases.

>> I think to that end we have to talk about their push into apps and I think whether or not that works is going to be kind of the defining question for them next year.

>> Yeah. And I think we've all discussed the importance of the Apps SDK and the apps directory, as they're calling it, and it's going to be a huge new channel for consumer. I think what's less discussed is that it's hyper-relevant to enterprise. So I think where ChatGPT shines is where it's able to operate across a number of tools for one workflow. And if you think about the number of things you do in your business day-to-day that operate across many tools, it's most of those things.

>> Yeah.

>> So I think that will have very interesting implications for the SaaS ecosystem, and it's a part of the app store we're not talking about as much.

>> Yeah.

>> Yeah. Maybe less of a prediction, but thinking through 2025, and we talked about all the big moves from the big labs and from the startup side: I think one of the biggest trends we've seen is app generation. And I think there's a real world where we see the big labs, with the distribution and the frequency of usage of people coming in, start saying, look, maybe there's a common type of product and apps that we could actually help you generate within the confines of the big lab products. I think that's one of the interesting things which, again going back to the supply chain of ideas and research, maybe that's one thing. And again, nothing groundbreaking, but as we know, the Ghibli moment broke the internet. My cousin, who knows nothing about tech, sent me a Ghibli photo.

>> Well, let's send this to your cousin then.

>> Yeah. And I think that goes to show that templates matter.

>> Yeah.

>> That style matters.

>> Yeah.

>> And I think about video and like it's pretty freaking good.

>> Yeah.

>> And it's possible that we're already at a point where it's not necessarily just about the capability of the big labs' models, but the stylistic things, a template. Think of TikTok. The core capability is largely still the same: music trend, dance, go. Except the trend and format keep on changing, and that keeps it extremely fresh.

>> So I feel like there's a real world where you can start thinking about ways to really build video-first products into these lab models, and I think the cost will go down enough for people to try it out, and I'm excited to see that.

>> Yeah, I think what I'm most excited about is sort of along those lines: basically everything

becoming multimodal. I call it "anything in to anything out." Initially, especially with these image and video models, it was: you put in a text prompt and you get an image out or a video out. You couldn't really do much with it. And now we've started to see this with the image-edit models, with Nano Banana and with Flux and with the new OpenAI model, where you can put an image in and get another image out.

>> You can put an image in with a text pair and a direction, or put an image in with a template or another reference image, and get another image out. What happens when you can put a video in and get images out that are related to, or are the next iteration of, the video? Or you can put a video in and a text prompt about what you want to edit, and get the edited video out. From my conversations with the labs, a lot of them are trying to basically combine all of these largely separate efforts they've had across text reasoning and intelligence, the LLM space, and image and video, into: what if we can merge those all into a mega model that can take a lot of different forms of content and produce much more. I think it's also going to have huge implications for design, because if you think about it, a lot of design is combining images with text, with video, with different elements in kind of interesting ways.
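To make the "anything in to anything out" idea concrete, here is a minimal, hypothetical sketch of how such a request could be represented as a data structure. The type names and fields below are illustrative assumptions, not any lab's actual API.

```python
# Hypothetical sketch of an "anything in, anything out" multimodal request.
# The types and field names are illustrative assumptions, not a real lab API.
from dataclasses import dataclass, field
from typing import Literal, Optional, List

Modality = Literal["text", "image", "video", "audio"]

@dataclass
class MediaInput:
    modality: Modality
    uri: Optional[str] = None   # path or URL to an image/video/audio reference
    text: Optional[str] = None  # a prompt or an edit instruction

@dataclass
class GenerationRequest:
    inputs: List[MediaInput] = field(default_factory=list)
    output_modality: Modality = "image"

def video_edit_request(video_path: str, instruction: str) -> GenerationRequest:
    """Video + text in, edited video out: one of the combinations discussed above."""
    return GenerationRequest(
        inputs=[
            MediaInput(modality="video", uri=video_path),
            MediaInput(modality="text", text=instruction),
        ],
        output_modality="video",
    )

if __name__ == "__main__":
    req = video_edit_request("clip.mp4", "swap the background for a rainy street")
    print(req)
```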

>> Yeah, I guess if I think about a macro-level prediction, I think it's actually going to be more of the same, in that when we talk about what the labs have launched in consumer, they've done a great job with models, and they've done a great job with incremental things that improve the core experience of using a ChatGPT or a Gemini.

>> Yeah. In my opinion, we've gone through dozens of things that they've launched or tried as new consumer products or new consumer interfaces: group chat, Pulse, Atlas, Sora. Google has had a long tail: Stitch, Gems, Opal, Doppl, tons. None of those are really working. And I think it's because it's not the core competency of these companies anymore to build opinionated standalone consumer UI. Out of all of those, I think the product that's working the most is NotebookLM. And that's one of maybe 20 things that Google has tried or experimented with. So I think it's actually very positive for consumer startups, in that the models will keep getting better, which the startups can use, and they'll keep making ChatGPT better and better, but I don't necessarily think that ChatGPT verticalizes into all of these other amazing use cases or products, and there's still room for startups to be building there.

>> I have a "yes, and" to that.

>> Okay.

>> Where, absolutely, but: when the input and the output is text, that's where the ChatGPTs and Geminis of the world shine the most. No matter how deep you go, no matter how specific you think your text output is going to be, essentially, given the frequency of usage of the main big lab products, I think it's going to be really hard to pull people away from that usage if your product is mainly text in and text out.

>> Yeah. So I do think you have to be creative around what the angle is that you can use to steal people away.

>> You know, I love that you use the word opinionated, because I think that for labs, certainly for big tech and perhaps increasingly for labs, the priorities always get set in the promo committee. And if you're a PM, and it's always the sort of mid-career PMs, and I've been one of these, the incentives are always to get promoted, and the way to get promoted is to build something safe that extends a core metric and a core feature. So building opinionated products is a very risky way to manage your career, you know, because they're probably not going to work. They're probably going to have a bunch of implications for legal and compliance, and the CEO might yell at you. So I just think that they are so structured to do incremental things. The more founders do opinionated things, the more advantaged they are.

>> Yeah. I think honestly the big thing we haven't discussed here is compute, which is that the labs have this inherent tension: there's a limited amount of compute and they either spend it on training models or they spend it on inference. And even within inference, there's this split between the entertainment, Ghibli-style use cases and the coding and intelligence use cases. I think xAI is probably the only model company that is not bottlenecked on compute, from my understanding. Whereas the others have to make really serious and significant calls, like, if we release Nano Banana and it goes super viral, it may slow down the next big LLM we're trying to push forward. Whereas startups who focus on the app layer don't have that problem, because there's no tension there.

>> Absolutely. Yeah. Look, we've talked about this before. But I also think that there are categories in which being multi-model just allows you to deliver a better proposition to the customer, and the labs and big tech are always going to be sort of definitionally first-party-model only. So I think as all the models get better, perhaps 80% of what you need can be served by a single model, but then there are the power users, and so much of AI is a power user story. You know, you always said that, well, power users are just power users. And I think that's true in a pre-AI world, but now the depth of value and the depth of monetization is so much higher that maybe all of AI is actually a power user story, you know, and everyone else is just traffic.

>> Yes.

>> Yeah.

>> Which is why we're also seeing consumer products, for the first time ever, have more than 100% revenue retention.

>> Yes.

>> And that's separating the good from the great from the exceptional in the consumer AI world. And to be clear, how that happens is they charge for usage, often in addition to a subscription. So you can use beyond whatever your quota is for the month given your subscription, and pay more.

>> It's either an upgrade of the tier, or actually buying tokens or more usage.

>> Yeah.

>> That's what differentiates it. Like, you know, if you told me pre-AI that we'd see a consumer company with 100-plus percent revenue retention, I'm like, that doesn't make any sense. That doesn't compute.

>> Yeah. Yeah. No pun intended.

>> Exactly.

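As a rough illustration of how usage-based pricing on top of a subscription can push net revenue retention above 100%, here is a small worked sketch. The cohort numbers below are made up purely for illustration.

```python
# Illustrative net revenue retention (NRR) calculation for a subscription plus
# usage-based product. All numbers are hypothetical.

def net_revenue_retention(start_revenue: float, end_revenue_same_cohort: float) -> float:
    """NRR = revenue from the same customer cohort a year later / starting revenue."""
    return end_revenue_same_cohort / start_revenue

# A cohort of subscribers pays $20/month at the start of the year.
subscribers = 1_000
start_mrr = subscribers * 20.0                  # $20,000

# A year later: some churn, but heavy users buy extra usage (tokens, tier upgrades).
retained = 850
subscription_mrr = retained * 20.0              # $17,000
usage_overage_mrr = 200 * 35.0                  # 200 power users spend ~$35 extra each

end_mrr = subscription_mrr + usage_overage_mrr  # $24,000
print(f"NRR: {net_revenue_retention(start_mrr, end_mrr):.0%}")  # -> 120%
```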

>> Well guys, okay, maybe let's talk about some specific recommendations. After this pod, what are the products people should download, or the features, or the models? What should folks be using today?

>> I guess on the multimodal point, I think one really underhyped product that people should check out, not because they'll use it every day, but because it shows what is possible when you combine an agent with image and text, is Pomelli. This is the Google Labs product where you put in the URL of your business, and it has an agent go to the website, pull all of the product and brand photos, summarize what it thinks your brand's aesthetic is, what it stands for, what kind of customers it's targeting, and then it will generate three different ad campaigns for you. And it will generate not only the text, but it will generate the Instagram post, it will generate the flyer, it will generate the photo of your product in whatever setting it thinks it should be based on your customer. A very cool product. It would be hard for it to become a giant standalone product within Google, I think, but it shows sort of the future of what happens if we combine agents with generation models that have really deep understanding of context that an image model or a video model normally wouldn't have.

>> Startup products though. Do you have a favorite startup product creative tool?

Yes.

>> In creative tools, I mean, we're investors in Krea, so this is biased, but I think they've really done an exceptional job of being the best place to use every model, or every quality model, across every modality, and also building more of the interface on top of these models. Like, I now prefer to use Nano Banana Pro in Krea, because Krea allows you to save elements, which are essentially characters or styles or objects that you can @-tag to re-prompt, versus having to drag the same image reference into Nano Banana over and over again.

>> That's a good one. I suppose it falls under the startup category, again shilling companies, but the one that I use the most is actually ElevenLabs Reader. And the reason is, we've seen an explosion in podcasts, and there's a reason for that, right? People are a lot more on the go, and our capability for reading, I think, is going down over time. So, you know, let's not fight the reality, let's embrace it. Okay, so then let's actually find the written material, translate it into listening, and do that. And I used to be a power user of tools like Pocket. You know, I didn't have time to read everything that I wanted to read, and it's a saving behavior, right? You're going around and saving all the things you eventually want to consume. But what I do now is similar, where I go get all the things I want to read, and I either PDF it or put it on ElevenLabs Reader, and once in a while when I'm on a walk and I have 3 or 4 minutes, you know, 1.5x speed or 2x speed, I just listen to one of these and get the gist of it. So I think that's been a good way to use a little bit of time as a sort of semi-normal person.

Well, first of all, I love this question because I am strongly opinionated that by far the best way to get up to speed on AI is just to try a ton of products and you get opinionated really quickly.

>> Justine and I, actually, for the whole month of December are on Twitter publishing one new consumer product a day for people to check out. So that's one way. I'll name three others that I think are super relevant or interesting that people can plug into their workflows. One would be Gamma for slide deck generation. You can go text prompt to slide deck, you can go document to slide deck. I use it for everything. Also, the slides are flexible sizes, so you're no longer editing every little pixel in your Google Slides to get it to fit on one slide, which is great. Second, Granola for note-taking. You might not have any meetings over the holidays, but in the new year you will, and it just gets better and better the more meetings you have on it, because it has the context of what you talked about before. And then lastly, I'm still going to plug trying the Comet browser if you want to try kind of an AI-native workspace. I think that's one of the most accessible ones to start with.

>> I mean, for me, I've spent my whole year obsessed with coding and AI code. It's just been so tremendously fun. I, by the way, Brian, would take the other side of your argument that the big labs or big tech will win in app generation. I think they just lack the focus. Products like Opal have been, you know, released with a whimper, and they're one-model only.

>> I didn't think they will win it. I think that we will see them doing it.

>> Yes. Yes. I think that's true. But I think for the pure consumer side, of course, Wabi is really fun and really capable, and I think they're creating the right sort of constraints on app generation so that you can get a really satisfying, functional result. And I think so far there's been a lot of overpromising in app generation, which has discouraged the early users. I also think, if you haven't tried GPT-5.2 in Codex or in Cursor, it's worth trying even for non-technical people; it's just amazing. I think almost being technical is sort of a constraint, because you have a pre-existing idea of what these models can do, and they can do a lot more, and I'm hearing increasingly about people doing knowledge work and writing essays in Cursor instead of just writing code.

>> Wow. Just one thing I'm going to do at year end is a popular trend I'm seeing on TikTok, where people ask, what is the most unhinged thing I said this year?

>> Okay.

>> And it actually does a review of all the things that you said. But I think, similarly, it'll be a good thing. I'm going to do this at the year end.

>> Tell me how to live a better life next year.

>> Yeah. Yeah, give me actual unvarnished opinions and some direction. I think it'll be helpful.

>> I love that idea.

>> I'm going for a worse life next year.

>> Fantastic.

>> Let's go full degen, guys.

>> Any closing thoughts?

>> The obvious one is that we are very actively investing in consumer companies, and, I think a lot of people say this, but I genuinely believe that the models have gotten to the level of quality that you can build a real, scalable app on top of them. Wabi is a great example of this. And so the hope is 2026 will be a huge year for consumer builders, not just for consumers being consumers of a product.

>> Yeah.

>> Yes. Well, thank you all for a super fun year in consumer AI. We'll be back with more next year, and merry Christmas, guys.

>> This is a wrap.

>> Yeah. Happy holidays.

>> Happy holidays.
