
A Founder's Playbook for Shipping 10x Faster with AI | Yana Welinder

By Peter Yang

Summary

Topics Covered

  • Founders Execute While PMs Debate
  • Ship Rough, Fix with Users Fast
  • AI Prototypes Iterate Per Customer
  • AI Feedback Merges Quant and Qual

Full Transcript

There's this fear that, oh, if I ship something that's too rough, people are just going to make up their mind and then not want to use the product later when it gets better. The only way that would happen is if you're super slow. Whereas if you really follow up with the user immediately and they're excited to try it again, then you don't really have that problem.

>> Anything that kind of trades off speed needs to be really carefully thought through.

>> By the time folks are done debating two good options, I will probably have executed on 10. Do I have any emails I need to respond to today? [music] And it will review my mailbox and identify anything that's urgent to respond to today. I've just gotten like 100% more effective in my work because I now have this.

>> Okay, welcome everyone. I'm really excited to have back on the podcast Yana, my friend and now Head of AI at Amplitude. I think Yana is like a top 1% ChatGPT power user, so I'm really excited to talk to her about how she uses AI for work, and also about what product leaders can learn from founders, since she's worn both hats. So welcome, Yana.

>> Thanks so much for having me. Super excited to chat.

>> Yeah, why don't we start with the most recent topic. Over the weekend I tweeted something about how Cursor scaled to a billion-dollar valuation without any PMs, and it became a big viral post about how PMs are not valuable anymore. [laughter] So you've been both a founder and now a product leader at a large company. What is different about these two roles, and what can product leaders learn from founders?

>> As a product leader generally, which is what I was before starting Kraftful, you do have to hone in on your product sense and have good product strategy and manage PMs and all that stuff. But it's a much more isolated role, I would say, within the organization.

>> Mh.

>> Whereas as a founder, obviously it's your baby, right? Like Chesky talks about, we're the biological parents of the company. So it's just a completely different role where you have intuition around what's going to be good for the company in a completely different way. Not just from a product perspective, just generally, in every possible way. And coming in now into another organization where I am not the biological parent of this company, I kind of am brought in to still be as much of a founder as I can be. Whenever larger companies acquire startups, they do it for a reason. In our case we were acquired for the product, but we were also acquired for the team, so in some sense my task is to maintain as much of my founder perspective as possible. And I'll say, after being a founder, it's very easy to do that. It's actually really hard to try to fit into an organization and have any other kind of role. So you come in and you can't help thinking: how would I want this company to run if it was my company? And then you just try to act on that.

>> Okay. So there's a lot of talk about how ICs have to wear multiple hats. You can't just be a designer or PM anymore. You've got to learn how to prototype, how to do a little bit of everything. So I guess even at the product leader or exec level, you should also...

>> Well, probably even more, right? You should probably be aware of what's going on with marketing and engineering and everything else.

>> Absolutely. I think that's actually the interesting thing about product: at any stage, whether you're a product leader or a product IC, a PM really has a super cross-functional role. Even though as PMs we have historically been very much, here's what defines my role at my organization, because PM at every organization is different, now you absolutely need to be super cross-functional. And cross-functional makes it sound like you're coordinating with other people, whereas actually you need to do that work yourself. You have to design, you have to do user research, you have to try to write code as much as possible, particularly now that there are so many tools where you can ship code. So I think it's become pretty inevitable to be all the things. And I don't think that's a bad thing. It gets everyone moving much faster, and it makes sure people feel much more ownership, because you're never like, this is my role, and someone else will do everything else.

>> Yeah, that is the key, because I was going to follow up with you on this cross-functional thing. A lot of PMs at larger companies feel kind of like a glorified cross-functional secretary: trying to align 10 different stakeholders, trying to align the leadership, with all these internal debates and document writing, trying to get everyone to agree on the same decision on a path forward.

>> Yeah.

>> I don't think that's what you're talking about, right?

>> No. Absolutely not. I'm definitely not talking about that. I mean, I think I'd forgotten just how much of that there is, and now, being back at a bigger company, I've realized I am most certainly not the debate girl, and I'm not going to be writing docs to coordinate stakeholders.

And I think that by the time folks are done debating two good options, I will probably have executed on 10. That's how founders move, and I think every PM should really do that, given how fast AI moves.

>> But how do you get the freedom or the agency to just ship stuff?

>> You know, actually, at Amplitude, Spenser has recently banned decisions by committee, specifically to enable PMs and engineers to be owners and to move fast and ship fast. We've been able to ship much faster as a result, particularly with the AI products, but really with any product. I think it's important that leaders enable their teams to move fast and have that ownership. It's hard at the IC level to say, actually, my company is now going to be AI native, and I'm not going to talk to anyone, I'm just going to go and ship. That is of course incredibly difficult. You need support to be able to do that. So it does need to come from the top, and if it's not coming from the founder CEO, it's definitely something that product leaders need to advocate for, so that their teams can ship at a pace that's relevant today and not become outdated.

>> What led him to ban decisions by committee? Was the company moving too slow, or...

>> Yeah, it definitely was a desire to be AI native, and that's why he acquired essentially multiple startups at the same time. The idea was to bring in more AI talent and folks with experience moving at that speed, and then get the whole organization to move the same way. So it was a very big strategic change, and this was just one piece of the bigger puzzle.

>> You know, Amplitude is a very successful company, right? It's worth billions of dollars. But I bet Spenser probably looks back fondly on the early days when he could just ship stuff and move fast, and [laughter] he probably just wants to get the company back to that state.

>> I think that's part of the story. The other piece is that he's very cognizant of how the world is changing and which companies are doing well and which are not. There are bigger companies, including public companies, that are adopting an AI-native approach and really changing how their teams move and ship. It just looks different from what public companies were doing two years ago. And I think it's been very strategic to make sure that Amplitude is in that boat, as opposed to being one of the companies that will need to be catching up now.

>> I mean, we keep saying AI native, but it's really a culture thing, right? This new breed of companies like Cursor and OpenAI and Anthropic are basically trying to keep headcount as low as possible, empower everyone to just work with AI, figure stuff out, and ship, because if you don't ship fast at these companies, the company is going to die. [laughter]

>> Exactly.

>> So you kind of have to do it. Yeah.

>> There are no guarantees right now, right? You have to move as fast as the AI startups are moving, because nothing is taken for granted.

>> Got it. So you probably have a bunch of PMs or other folks reporting to you. How do you encourage them to move fast, maybe make some mistakes, and not feel bad about it?

>> I think the best way to encourage folks to move fast is to lead by example and show: okay, here's a decision we need to make. How would we make this decision in a startup versus in a big company? I've done a lot of that recently. We have this launch: how would I do this launch? I'm just going to do this launch the way I would do it, so that other people can see that it can be done a certain way, that it's going to be faster and better, and that it will leverage AI in like 90% more places than you would in a bigger company. Folks get to see that, and they go, okay, next time we're doing some other launch, we can do that too. So I think it's a lot of showing instead of trying to win an argument, because otherwise the default is to debate things, and there's no time to debate things. The only way you can do it is to show that something works, and then get people excited about doing it too.

>> Yeah, that's a good point. I also feel like there should be a higher tolerance for failure, right? Because if you move fast, some things are not going to work out, or there will be some mistakes, and then people who are used to process say, hey, I told you so, let's add five more steps to this process. I think anything that trades off speed needs to be really carefully thought through.

>> I completely agree. One thing that is really helpful to show, particularly around failure, is that oftentimes when you fail, it's an opportunity to do even better, particularly when you fail with customers and users and you really act fast on the failure. If you ship something and there are some rough edges, you can work those out with the users quickly as they report them. The reaction of a user who had some initial frustration, reported it to you, and had it fixed within 15 minutes is going to be so much better than that of a user who was just mildly happy because it worked the first time. That impression of, I came in, I said I needed this to be better, and the company actually went and did it and followed up with me: those users tend to be your strongest allies. So that's another way of showing that failing is actually fine, as long as you're constantly readjusting and acting fast.

>> Yeah. If you're co-creating the stuff with your users, listening to them, and fixing all their bugs and feedback, they're probably even stronger allies than if you'd just given them a perfect product right off the bat.

>> Yes.

>> I don't know why more companies don't do this. Just share all the stuff with users along the way, and you can ship to wider concentric circles as the product gets better. Even OpenAI does this, right? They're super successful, and the features they launch are pretty MVP, and then they iterate and get better along the way.

>> Yeah. The thing is, there's this fear that, oh, if I ship something that's too rough, people are just going to make up their mind and then not want to use the product later when it gets better. I think the key is to make sure you don't lose their interest along the way. The only way that would happen is if you're super slow, whereas if you really follow up with the user immediately and make sure they get an update, and they're excited to try it again, then you don't really have that problem. I think that's what OpenAI has done well, right? They ship things, and if something doesn't work the way you want, the fast follows happen like the next day, or even the same day. That's really important, and they're really vocal about it and make sure everyone knows. That's not how companies were used to adjusting for failure, so the expectation is just different.

>> Yeah, that's a good point. It's like, if you sell me something and it's crappy, I'm like, hey, this sucks. [laughter] But then you actually listen to me and make it better. Then I'm like, "Oh, okay. She actually cares." So then I'll stick around for a little bit longer.

>> Yeah. [laughter] In particular, if I move fast, then you understand that the reason it sucked a little bit to begin with was because I was moving fast. And you start appreciating that: "Okay, this is cool. It's moving very fast. Imagine how good it's going to be in a year if it keeps moving like this." It just sets a very different tone and expectation around everything, versus slow-moving things that are just polishing toward perfection. By the time it's done polishing, it's not even the thing you wanted.

>> Versus you ship something, people are reacting to it, and you're just waiting for a press release or something. It's like [laughter]

>> Yeah. You've got to be active. I think the issue is that if you're optimizing for avoiding failure at any cost, you're really optimizing for avoiding any kind of discoverability or adoption, because ultimately you've polished for too long and been too careful, and that's not how you win. It's really hard to win like that.

>> Yeah. In defense of polishing: you can roll out a really rough product to 10 users, get their feedback, polish a little more, roll it out to 100, and slowly expand. You don't have to roll out a super crappy product to all your users right away. [laughter]

>> No, no. Sorry, I should have been clear. You should definitely have the product be as polished as you can at the speed you want to be moving at, but you can't obsess over it being absolutely perfect before you roll it out to any users at all. That's the problem.

>> I think there's a difference between polishing with real customers, even 10 customers, versus polishing internally through debates, trying to get everyone aligned, and then you end up with a super compromised, crappy product.

>> Yes. Exactly. And that's such a great point, because there is this feeling internally that, oh, we're making progress, we're aligning all of these people. But really it's just a bunch of people who love being right, so they're all trying to be right, and they get [laughter] some amount of, I don't know, satisfaction over how much they were right in that conversation, but there's actually zero progress.

>> Yeah. Because you never know. You're not the customer.

>> Yeah.

>> Yeah. This episode is brought to you by Optimizely. The problem in marketing usually isn't a lack of ideas, it's a [music] lack of time. If your to-do list keeps getting away from you, then take a look at Opal, an AI agent platform [music] built specifically for marketing. With Opal, you can use AI agents for SEO and GEO recommendations, A/B testing, [music] website analysis, and much more. Opal knows your brand inside and out, and plugs into your existing tools and data systems so that you can save time on labor-intensive feature testing, reporting, and more. See what it can take off your plate at optimizely.com/ai.

Now back to our episode.

>> You're an OpenAI and ChatGPT power user, so let's talk about how you use this stuff to do your job.

>> I do a lot of AI prototyping, which isn't particularly new; lots of folks do that. What may be a little unique in how I do it is that usually when I'm building an AI prototype, other than when I'm doing it just for myself to see what something would look like, I'm doing it to show to customers. I'll often have a prototype, jump on a customer call, and get lots of feedback from that call. Then I'll immediately take that feedback, iterate on the prototype, and incorporate all of it before my next call. So on the next customer call, they're looking at a completely different prototype that's been adjusted based on the first customer, and I keep having those iteration cycles. That was completely impossible to do before. You would always have just the one prototype, show it to 10 customers, and never know whether the stuff the first customer told you actually resonated with other people. You'd have to reconcile all the feedback you'd been getting, and it's really hard to know what's actually impactful. Now you can move much faster with those iteration cycles. So that's been a really good one for me.

Another one has been using Sora a lot for marketing collateral. We just did the launch of AI Feedback, and we ended up using a lot of Sora videos, alongside non-Sora videos: actual full-on recorded demos of me showing the product, which I've done a lot in the past and have always had in launches. Then we had these really fun Sora videos, and it was really interesting to see how different the engagement on social was between those two formats. For any video snippet on social, I'm sure you've looked at this too, when you check the analytics, very few people have watched through to the end; the drop-off is kind of crazy on Twitter. But for the Sora videos, sometimes 50% or more were watching the whole video through to the end, which was incredible to see. I've never seen data like that before. [laughter]

I actually had someone comment on Twitter saying, this is an incredible post, I came for the video and stayed for the copy. And I asked, what was it about the video? Because right at that time I was looking at the analytics and thinking, that's really fascinating, I'm so curious why this is different. And he said, well, I kept watching the video and going: is this AI? Is it not AI? Definitely AI. Actually, maybe not AI. He kept having that debate in his head of, what am I watching? I just thought it was really interesting. It's a completely different way of being hooked.

>> What kind of Sora videos do you make for the product? It's not a product demo, right? Is it people using the product, or...

>> No, they're just funny videos about related topics. This particular one was a video of me at a cemetery, leaving flowers on a grave for NPS. It was a headstone that said something like, RIP NPS, survived by real user feedback. And it looks very realistic, of course, because it's a Sora video. And then I had the whole copy on how NPS never worked and why user feedback is a much better way.

>> Oh, that's great. Yeah.

>> Yeah. Maybe I should make some Sora videos for memes like that. That's good. Some PM memes. Yeah.

>> So that was a really good one. And then the thing I use every day is the ChatGPT Atlas browser. I use it for writing everything, summarizing, interacting with my mailbox. I use agent mode quite a bit. Lots and lots of different use cases where I feel like I've gotten 100% more effective in my work because I now have this.

>> Maybe you can show some of that, because, okay, to be fair, I only tried Atlas for like 10 minutes, [laughter] but when I tried it, I was like, what? This is just prompting ChatGPT in the browser URL bar. So maybe you can show me some cool use cases.

>> One thing is, I may do something as simple as: do I have any emails I need to respond to today?

>> Mhm.

>> And it will review my mailbox and identify anything that's urgent to respond to today. So it triages: it pulls out what needs to be responded to, the nice-to-haves, and the FYI emails that don't really need a response. And then I can go to one of my emails. Let's pull up something that's clearly not something I would want to actually respond to, but let's do it just for the sake of it: respond to this email. It politely declines.

>> Okay, I see. Awesome. No em dashes. [laughter]

>> Yeah, maybe. Yeah. Got it.

>> Yeah. And so this is great: it will write my email. I can also do this with agent mode at scale with a bunch of emails, just have it go through and write responses to people, and then review them later and send them off. But where I've found this to be really helpful is with emails that are super emotional, things I don't actually want to read because they're just going to be annoying. Someone got too emotional for some reason that's not appropriate. Then I can have ChatGPT summarize the email in three bullets so I know the substance of it. I don't actually have to read someone overreacting about something. Then I can respond to the content and ask ChatGPT to politely respond. It will usually include things like, I hear you, or, I really appreciate your perspective. It will do the things you need to do without the emotional overload, which is really great.
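For readers who want to reproduce this triage-and-draft workflow outside of Atlas, here is a minimal sketch of what the same steps could look like against the OpenAI API. The model name, category labels, and toy inbox are assumptions for illustration only; this is not how Atlas itself is implemented.

```python
# Hypothetical sketch of the inbox-triage workflow described above,
# implemented against the OpenAI API rather than the Atlas browser.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRIAGE_PROMPT = (
    "Classify this email as URGENT (needs a reply today), "
    "NICE_TO_HAVE (reply when convenient), or FYI (no reply needed). "
    "Answer with the label only."
)

def triage(email_body: str) -> str:
    """Return URGENT, NICE_TO_HAVE, or FYI for one email."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": email_body},
        ],
    )
    return resp.choices[0].message.content.strip()

def summarize_then_draft(email_body: str) -> str:
    """Summarize an emotional email in three bullets, then draft a polite
    reply, so you respond to the substance without reading the venting."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "1) Summarize the substance of this email in three bullets, "
                "ignoring emotional language. 2) Draft a short, polite reply "
                "that acknowledges the sender's perspective.\n\n" + email_body
            ),
        }],
    )
    return resp.choices[0].message.content

# Usage: in practice, emails would come from your mail client's API.
inbox = ["Hi, the export has been broken for TWO DAYS and nobody cares!!"]
for email in inbox:
    if triage(email) == "URGENT":
        print(summarize_then_draft(email))
```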

>> Interesting. [laughter] You get those kinds of emails? Is it like when you move too fast and ship something and somebody's like, what? You didn't check this box. [laughter] Is that what happens?

>> We had a lot of customers who really wanted Kraftful to come out as quickly as possible. So that was a good use case where it was just like, I can't take on everyone's emotions right now, I'm trying to ship this product as quickly as possible, but I do want to engage with everyone and make them feel heard.

>> Got it.

>> So that's a good example.

>> Got it. Okay, that makes sense. And I think you also use agent mode to unsubscribe from spam and stuff like that. I never try that stuff.

>> Yeah, exactly. You can do that. And one of my favorite things is to interact with things inline. Once you have written something, you can ask: is this clear and well written? If I actually did want to write something myself, which I sometimes do, then we can do that, and it can update or replace the copy with whatever it suggests.

>> Yeah.

>> There are a lot of different ways.

>> It's funny, whenever I get my guests to do these demos, they always try to fix their spelling mistakes, but I don't think ChatGPT cares about your spelling mistakes. It doesn't matter; it understands all the spelling.

>> That's true. And the thing is, I bet this is true for all of your guests, because it's certainly true for me. Folks who use AI a lot have a lot of spelling mistakes, because the more you use AI, the worse you get at spelling.

>> Yeah.

>> And so I don't normally correct my spelling when I'm chatting with ChatGPT, but on a podcast I feel like I actually have to correct myself.

>> Yeah. I worry about that, because I think I'm a pretty good writer overall, but I've gotten super lazy. I just dictate to AI and say, hey, can you cut this copy by 20%? [laughter] And if you asked me to write something from scratch again, I don't know if I could do it or not. [laughter]

[laughter] >> I absolutely I think you know I use the the other piece I use a lot is um I use the voice mode here a lot because and I just dictate things. I have a

eight-month-old. So whenever whenever

eight-month-old. So whenever whenever I'm like with with her um this is just such a handy thing like you can get through so many things that I wouldn't be able to because now I'm like handsree

and and but yeah absolutely right. I

also as a result I cannot type and I cannot spell >> there's so many things that like >> I'm outsourcing all of that to AI.

[laughter]

>> The only thing I can do is order my little ChatGPT around: hey, do this and do that.

>> Yes. Yes.

>> Yeah. [laughter] That's great. Okay, let's talk about this. We all love AI, but what is something that you still do manually that AI is not good at yet?

>> Great question. I've seen this show up in a few different ways, but prompt engineering and writing evals is not something that AI can do well at all yet.

>> And I thought that was kind of interesting. There's some controversy around this, which I didn't expect, because every time a new model comes out, I try to have it do all the things, including prompt engineering, and I always conclude: okay, not this one, maybe the next one. Then when GPT-5 came out and I was reading the OpenAI prompt engineering guide for GPT-5, they actually suggest that you should use GPT-5 to refine your prompts. Or maybe that was in the eval guide; it was in one of the guides. So I ended up trying again, and I found that the prompt I got was finally workable. It could actually write prompts, but they were still a step worse than human-written prompts. I have a prompt that I use for one step in our analysis pipeline where we use GPT-4.1, a non-reasoning model, for speed, because we just can't have it go off and start reasoning. GPT-5 could write a prompt that worked as well as the human-written prompt for GPT-4.1, but only GPT-5 could do it. So still clearly worse, but getting closer. And one thing I've learned is that folks use AI a lot for writing prompts, which is not a great idea. Engineers in particular tend to do it, because they use Cursor for a lot of things, so they also use Cursor to write prompts, and often end up using models much worse than GPT-5 to write those prompts, and those absolutely cannot do it. It's not a good idea.

>> And when you say writing prompts, do you mean writing from scratch, or editing a prompt that you have?

>> Both. Can't do either.

>> Interesting.

>> Well, yes.

>> For my little creator prompts, sometimes I have a back-and-forth dialogue with it, make a bunch of changes, and then at the end I'll say, "Hey, can you update the original prompt to include all this feedback?"

>> Yeah.

>> And it does do that. I do have to manually review it to make sure it's actually good.

>> Yes.

>> But I guess that kind of saves me some time.

>> Well, yeah, I do that too. And what I've found is that when I then run evals on different components of it, maybe 10% of the improvements worked, and they still had to be tweaked by a human, and 90% actually made it worse. So you can't just take the AI-generated improvement to the prompt and run with it. There's a lot of massaging. Sometimes it can propose something that could actually work, but you still need to do lots of work on it to make it actually perform better.
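As a rough illustration of the eval loop behind those 10%/90% numbers, here is a minimal sketch that scores a human-written prompt, and any AI-suggested rewrites, against a small labeled set. The task, examples, and model choice are made-up placeholders, not Amplitude's actual pipeline.

```python
# Hypothetical eval harness: compare a human-written prompt against
# AI-suggested rewrites on a labeled set, keeping whichever scores best.
from openai import OpenAI

client = OpenAI()

# Toy labeled examples for one pipeline step (here: feedback sentiment).
EVAL_SET = [
    {"input": "The new export button saves me an hour a day.", "expected": "positive"},
    {"input": "Notifications are so noisy I muted the whole app.", "expected": "negative"},
    {"input": "How do I change my billing email?", "expected": "neutral"},
]

def run_prompt(system_prompt: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4.1",  # non-reasoning model for speed, as in the episode
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

def score(system_prompt: str) -> float:
    """Fraction of eval examples the prompt labels correctly."""
    hits = sum(
        run_prompt(system_prompt, ex["input"]) == ex["expected"]
        for ex in EVAL_SET
    )
    return hits / len(EVAL_SET)

human_prompt = (
    "Label the user feedback as positive, negative, or neutral. "
    "Reply with one word."
)
candidates = {"human": human_prompt}
# ...append AI-suggested rewrites of human_prompt here, then compare:
for name, prompt in candidates.items():
    print(name, score(prompt))
```

In practice the eval set would be far larger, and, per her experience, most AI-suggested rewrites would score below the human baseline, which is exactly why the harness matters.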

>> Interesting. Okay. I guess it is kind of like an intern. You've got to supervise what it does.

>> Yes. That's right.

>> Yeah.

>> It's getting there. Eventually, we will be the interns for it, you know.

>> Yeah.

>> But we're not there yet.

>> I feel like I'm going to be a much lazier intern than AI, so I don't know how it's going to work out. [laughter]

>> Me, too. I will be fired immediately.

>> Yeah. Okay, cool. Well, let's talk about Amplitude now, actually. You just shipped a big update. One thing I feel about Amplitude and any kind of data product is that you can't just have the data. Just having the quantitative stuff doesn't give you the full picture; you've got to have the user feedback and the qualitative stuff. It's kind of like what Jeff Bezos said, right? If the data doesn't match the anecdotes, you should trust the anecdotes. [laughter]

>> Yes.

>> So I'm really glad that you actually shipped the qualitative stuff as part of M2. I think that's a huge gap. So do you want to actually show the product?

>> Yeah, absolutely. And I completely agree. Obviously I will agree, because that was my baby. What I ended up shipping: we've been integrating Kraftful into Amplitude, so Amplitude Feedback is essentially Kraftful inside Amplitude. What's different, and I'll show you, is how we've integrated it to show both quant and qual in one place, because Kraftful was only qual. At a high level, you start by just connecting your sources of feedback. You can connect support ticket sources or public data like App Store reviews or Google Play, and one thing customers are reacting to quite a bit is just how easy it is to connect things. If you want to connect app reviews, you enter the name of your app, and it pulls in that data and collects it every day. Then it gives you these prioritized lists. You get your list of top feature requests from across the different sources. In this case, I've connected a bunch of public data about Slack, so you have iOS and Android reviews for the Slack app, YouTube reviews of Slack, and I think also Twitter mentions of Slack. What you get is a list of the top feature requests. The top feature request, mentioned 242 times across all the different sources, is notification preferences and controls. You can click into that and see what users actually said about it, and then look at deep dives that tell you, within those 242 mentions, what the common topics were, and use those as filters. You can do the same thing with top complaints in the product, which of course is again too many notifications, but also pricing, communication, things like that. And you can look at the opposite: what are some things people actually like about the product? That's usually helpful for strategic planning, to see what we should double down on.

>> Or what brands came up frequently. That's usually helpful to see how my customers talk about my competitors; that's often what comes up there. Maybe there are some big integrations that also show up. But to go back to feature requests: the cool thing is that because this now lives in Amplitude, we can take these 242 users and create a cohort, and look at how these users are using the product, or look at related session replays of how they're interacting with their notifications, and see both what they did and what they're saying in one place. Or we can create a survey and ask more questions about notifications, and that data will feed back into AI Feedback and be analyzed alongside all of this. So it makes your feedback richer and richer. Those are the new parts: what we didn't have in Kraftful that we're now able to do in Amplitude, because we have both pieces in one place.

And then obviously, sometimes you'll just want to see what people are saying on Twitter, or what people are saying about my iOS app, so you can filter it in different ways, or look at just the last day.

>> I think combining that feedback with the metrics is actually pretty amazing. I don't think there's another product that does this, right? Is there? I don't think there is.

>> No, there isn't. We're the first. That's essentially the biggest reason behind the acquisition: to bring this all together in one place and really paint the whole picture, which is really cool.

>> That's awesome too. So I can make a cohort from any of these users and then look at the funnel metrics or whatever I want for them.

>> Yeah, exactly. And study how they're doing. You could set up a funnel for notification management and look at that. Some other ways you can interact with this: maybe you want to ask some very specific questions. Because it's Slack and we all use it, we know there are a bunch of things about notifications, across both feature requests and complaints and various topics. But we could also just ask: what do users say about notifications?

>> Mhm.

>> So it's going to pull up all of that feedback and give us a summary of what users are saying about notifications. But then you can take it one step further and say: using all this data about notifications, now write a PRD for me. And it will write a PRD that you can go ahead and edit in the product. Oh, there it is. So you can see critical issues, usability frustrations, and then some positive notes, and then you can act on it and do more.

>> That's great.

>> Yeah. So there are a few different ways to interact with this. What I just demoed is actually in our MCP, so you can do this in Amplitude, but you can also do it in Claude or ChatGPT, pull that data into other places, or schedule notifications using agents and things like that. So there are more things you can do with it beyond what happens automatically.
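To make the pipeline concrete, here is a hypothetical sketch of the core loop described above: tag each piece of raw feedback, then rank topics by mention count. The category labels, model, and sample data are assumptions for illustration; this shows the general technique, not Amplitude's implementation.

```python
# Hypothetical sketch of a feedback pipeline like the one demoed:
# pull raw feedback from several sources, tag each item with an LLM,
# then rank topics by mention count. Not Amplitude's actual code.
import json
from collections import Counter
from openai import OpenAI

client = OpenAI()

TAG_PROMPT = (
    "You tag product feedback. Return JSON with keys "
    '"kind" (feature_request | complaint | praise) and '
    '"topic" (a short noun phrase, e.g. "notification preferences").'
)

def tag(feedback: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4.1",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": TAG_PROMPT},
            {"role": "user", "content": feedback},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# In the real product these would be pulled daily from app reviews,
# support tickets, social mentions, and so on.
raw_feedback = [
    "Please let me mute channels on a schedule.",
    "Way too many notifications, I turned them all off.",
    "Love the new huddles feature!",
]

counts = Counter()
for item in raw_feedback:
    t = tag(item)
    counts[(t["kind"], t["topic"])] += 1

# Prioritized list, most-mentioned topics first, like the "242 mentions" view.
for (kind, topic), n in counts.most_common():
    print(f"{kind}: {topic} ({n} mentions)")
```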

>> Okay. So I guess this saves a bunch of time. Number one, it saves the time of manually copying and pasting feedback to get AI to summarize it; that's all done for you already.

>> Yes.

>> Yeah.

>> Trying to classify it into different categories, that's done for you too.

>> Yes.

>> And then even the PRD is done. You just have to get the cross-functional alignment step done, and then it's the perfect product.

>> Yeah, exactly. We just need to replace all those humans with other AIs and then it's going to be so great.

>> This cross-functional alignment... like a likelihood-of-getting-cross-functional-alignment score or something. No, no, I'm just joking.

>> That would be great. That would be awesome. And the cool thing is that it gathers all this data every day. So you can filter by just the last day and look at what people said after my last launch, or what people said in the past few weeks when you're doing sprint planning or quarterly planning. It pulls the data daily and updates your lists based on what's now showing up in the data.

>> Yeah. I can see a lot of potential to expand this too. I feel like maybe eventually it could even reply to the customers or something.

>> Yes. You know.

>> Yeah. We had that on our roadmap at Kraftful, so it's now come over to Amplitude on our migrated roadmap.

>> Okay.

>> We definitely want to be able to close the loop with customers.

>> And it is so easy, right? Because you're getting these lists; we know the 200 users who have requested this. So it becomes really easy to close the loop.

>> 242. I built a much simpler version of this too, summarizing feedback. And I feel like AI is actually arguably even better for the qualitative use case than the quant stuff, because the qualitative stuff is a lot of copy, and it's really good at summarizing copy and extracting trends and insights. And with the numbers, it's not really good at math, right? It's not.

>> No. [laughter] Well, it's getting much better at math. But you're absolutely right, it got better at summarization before it got better at math. For me, having built the first prototype of this in early 2020 and seen the evolution of LLMs from the early days: back then they were terrible at text summarization. Really good at text generation, but terrible at text summarization. Eventually there was an unlock, which came, I would say, probably late 2022 or early 2023, around the davinci model right before GPT-3.5, the original ChatGPT model. That's when summarization use cases started somewhat working. It was still really hard; you had to do a ton of prompt engineering to get it to do anything. With math, I feel like we're just at the cusp of it actually getting better. Coding was probably last year, when all the coding use cases started to get unlocked. We're on this path of model capability going from AI not working at all to AGI, and with every new model that comes out there's always this capability unlock, which is really cool to see.

>> Yeah, that's actually good, because my last question for you was going to be about why we haven't seen, like, a $29 billion AI analyst company yet. Or maybe [laughter] we have.

>> But maybe the answer is just that if the models catch up, it will unlock a whole bunch more use cases.

>> Exactly. I think that's exactly it, or half of the answer: the models that will enable new AI analytics have really just started to come out. Those use cases are just starting to get unlocked; this is the time to build that. We're starting to see it more and more. The stuff we're building now at Amplitude is just starting to work in ways that I know certainly couldn't have worked a year ago. But I think that's only part of the problem. The other piece is user adoption, because the persona that tends to use analytics has built up a lot of workflows in a certain way. Those workflows are harder to replace and harder to change than a workflow that just deals with document editing or writing copy. That's so much easier. Workflows where folks are writing blog posts or something are such an easy thing to come in and replace, whereas folks have built up so much context around how they do their data analysis.

>> I think that's a much harder user adoption problem. Again, we're starting to see early AI adopters within those spaces think about how they would use AI for this, but it is a more difficult puzzle to get adoption there.

>> I feel like a lot of data scientists get asked by their annoying PMs, hey, what about this data, what about that data, can you run this query? I feel like that kind of stuff hopefully can get automated first, so that they can do more interesting work.

>> Totally. Yes, exactly. But for it to be a really big solution, it needs to be able to do all those things for all those people. So that's partly why that really big disruption hasn't happened. I think that's the reason.

>> Yeah. And another reason, I think, is that you just can't get the data wrong, right? You can't really have errors in the data. You've got to get the data correct almost 100% of the time.

>> Yeah. [laughter] Exactly. The quality bar is super high. It's really important that it's actually correct. You can't say, "Oh, your daily active users were off by just a few digits, but it didn't really matter," right? It's so important.

>> Yeah. It's like the AI companies claiming they have $100 million ARR, and, oh, it's actually $10 million ARR.

>> Yeah. Exactly.

>> They're probably just using AI. It's not their fault. [laughter]

>> Okay, cool. All right. Well, Yana, I know you're really busy, but where can people find you and your product?

>> Yeah, mostly right now we've gone all AI native and we're on Twitter all the time. You can follow along at Amplitude HQ on Twitter. And I'm there as @yanatweets. I need to update my handle to be like Yana X, but it sounds so bad, so I wouldn't do that. We're all now trying to be on Twitter quite a bit. Our founder CEO Spenser Skates is on Twitter; you can follow him. Spenser Skates is the handle.

>> Yeah, I heard he really wants more followers. So maybe everyone watching this should go follow him and mention this episode.

>> Yeah, then he'll compliment Yana for sure. [laughter]

>> Cool.

>> Love it. Thanks, Skates. That's exactly what I needed.

>> Okay, cool. All right. Well, thanks so much for your time. This was an awesome conversation.

>> Really really enjoyed it.
