
How Anthropic’s product team moves faster than anyone else | Cat Wu (Head of Product, Claude Code)

By Lenny's Podcast

Summary

Topics Covered

  • Ship Weekly or Die: The New PM Playbook
  • Product Taste Is the New Engineering Skill
  • Build for Today's Model, Not Tomorrow's AGI
  • The Model Will Eat Your Harness for Breakfast
  • Automate to 100% or Don't Bother

Full Transcript

I think it is very hard to be the right amount of AGI-pilled. It's very easy to build the product for the super-AGI strong model. The hard thing is figuring out, for the current model, how do you elicit the maximum capability?

I've never seen anything like the pace you folks at Anthropic are shipping at.

We want to remove every single barrier to shipping things. The timelines for a lot of our product features have gone down from six months to one month, and sometimes even to one day. You're interviewing hundreds of PMs, and you just keep feeling like they're approaching it very incorrectly.

The PM role is changing a lot. It's changing really quickly. The thing that is extremely important for building AI-native products is iterating so quickly, figuring out a way for you to actually launch features every single week.

What do you think are the emerging skills PMs need to develop?

It comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write.

Today my guest is Cat Wu, head of product for Claude Code and Cowork at Anthropic. Cat is at the center of everything that is changing in AI and product and building, and she and her team are building the product that is most changing the way that we all build our products. She is so full of insights and wisdom and lessons. This is an episode you cannot miss. Before we get into it, don't forget to check out lennyspodcast.com for an insane set of deals available exclusively to Lenny's newsletter subscribers. With that, I bring you Cat Wu.

Cat, welcome to the podcast.

Thanks for having me.

I have so many questions. I'm so excited to have you on this podcast. I want to start with giving people an understanding of your role alongside Boris. Everybody knows Boris. His episode is the number one most popular episode on this podcast. No pressure. He created Claude Code. He leads the team, ships a bazillion PRs a day from his phone. I don't even know what the number is anymore. I think people don't give you enough credit for the success that Claude Code has had, and Cowork, and all the things you all are building. Help us understand your role on the team, how you work with Boris, how you split responsibilities. What does the PM role look like on the Claude Code team?

I feel very lucky to work with Boris. He's been an amazing thought partner. He's our tech lead. He's very much the product visionary, and he is great at setting: this is what the product needs to be three months, six months from now. This is the AGI-pilled version of the product.

A lot of my role is figuring out, okay, what is the path from where we are today to that vision three to six months from now. And I spend more of my time on the cross-functional side: making sure that our marketing team, sales team, finance, capacity, etc. are bought in on the plan, that we're all rowing in the same direction, and that once the feature is ready there aren't any blockers to shipping it. I think in many ways it works well because we kind of mind-meld, but it is actually a remarkably blurry line. I think we're like 80% mind-melded, and then there's this 20% of things that maybe I care a lot more about, so I'll drive those, and 20% where he cares a lot more than me and he just drives those.

This episode is brought to you by our season's presenting sponsor, WorkOS.

What do OpenAI, Anthropic, Cursor, Vercel, Replit, Sierra, Clay, and hundreds of other winning companies all have in common? They are all powered by WorkOS. If you're building a product for the enterprise, you've felt the pain of integrating single sign-on, SCIM, RBAC, audit logs, and other features required by large companies. WorkOS turns those deal blockers into drop-in APIs with a modern developer platform built specifically for B2B SaaS.

Literally every startup that I'm an investor in that starts to expand upmarket ends up working with WorkOS. And that's because they are the best.

Whether you are a seed-stage startup trying to land your first enterprise customer or a unicorn expanding globally, WorkOS is the fastest path to becoming enterprise-ready and unblocking growth. It's essentially Stripe for enterprise features. Visit workos.com to get started, or just hit up their Slack, where they have actual engineers waiting to answer your questions. WorkOS allows you to build faster with delightful APIs, comprehensive docs, and a smooth developer experience. Go to workos.com to make your app enterprise-ready today.

Something that you shared before we started recording is the fact that you're interviewing hundreds of PMs all the time. If I had a nickel every time someone asked me for an intro to someone at Anthropic to go work at Anthropic as a PM, I'd have 30 billion in ARR. It's just the number one place people want to go work at. So, I can only imagine how many PMs you're interviewing. You told me that you're just seeing people doing it wrong, the way they're approaching what they think it takes to be a successful AI PM. Talk about what you're seeing and what people need to understand about what it takes to be successful these days.

I think before AI, technology shifts were a lot slower, so you could plan on 6-to-12-month time horizons. And because you were shipping features at a bit of a slower rate, there was a lot more emphasis on coordinating with all the other partner teams to make sure that they're shipping features that unblock your features, because code at that time was very expensive to make.

I think now, with AI and with how much that has accelerated engineering and with how quickly the model capabilities are improving, the timelines for a lot of our product features have gone down from six months to one month, and sometimes to one week or even one day. And with that, we actually need to make sure that products ship quite quickly. What that means is, as a PM, there should be less emphasis on making sure that you're aligning your multi-quarter roadmaps with your partner teams, and more emphasis on, okay, how can we figure out the fastest way to get something out the door? How can we figure out how to make a concept corner of our product suite where an engineer has an idea or a PM has an idea, and by the end of the week we are able to get it into our users' hands?

I think the PMs who do the best on AI-native products are the ones who can figure out: how can I shorten the time from having this idea to actually getting the product in the hands of users, and help define what are the most important tasks that need to work out of the box for my product.

So, what I love about this is what you're saying: people haven't grasped how fast they need to move, and how much of the job now is helping the team move fast. What helps do that? What does your PM team do to help them move this fast, other than have access to the most advanced models?

I think the first thing is to set clear goals, because LLMs are so general that that actually

creates a lot of ambiguity in who we're building for, what problems we're trying to solve, what the top use cases are.

And so I think a great PM is able to say: okay, our key user is professional developers. The main problem that we want to solve for this feature is, maybe there are too many permission prompts and people are feeling fatigue. And the use case is: we want professional developers at enterprises to safely get to zero permission prompts. That actually sets a pretty clear goal, because it rules out a lot of potential approaches for reducing permission prompts, so that people can get a lot more done with one prompt. And

then I think the second thing that's very important is figuring out some repeatable process for getting these features shipped. So for Claude Code, what we do is we actually ship almost all of our features in research preview. We clearly brand this when we ship something, so that users know that this is an early product. This is just an idea, just something that we're trying to get feedback on and iterating on, and it might not be supported forever. What this does is it reduces our commitment for shipping something. We can just get something out in a week or two. And then

the third thing that a PM should do is help create the framework for the team, so that they know when to pull in cross-functional partners and what those cross-functional partners' expectations are. So for example, we have a really tight process between engineering, marketing, and docs. When engineers have a feature that they feel is ready and that we've dogfooded internally, they post it in our evergreen launch room. And then Sarah, who leads our docs, and Alex, who leads PMM, and Tar and Lydia on DevRel just jump in and can turn around the marketing announcement for it the very next day. And because we have this really tight process, it lowers the friction for any engineer to ship something, and PM is the role that should be setting this up.

How do PRDs fit into this? The fact that you said that goals are a really important part, just being aligned on what does success look like, who is this for, who's this not for. Are you writing PRDs? Is it just a couple bullet points? How has that evolved in the world of the AI PM?

So there are two things that we do. One is we have very rigorous metrics, and we do metrics readouts with the entire team every week. The goal of this is to make sure that everyone deeply understands all the facets of our business: what our key goals are, how they're trending, and what drives them.

The second thing that we do is we have this list of team principles. This includes who our key users are and why those are our key users. The reason that we articulate all of this is so that everybody on the team feels like they understand how our business works. They understand what's important to us and what we're willing to trade off. And it lets people make decisions by themselves without feeling like they're blocked on PM or any other stakeholder.

I love how so much of this is like, okay, we still need PMs in the future. There's so much talk of: why do we need PMs? We're just going to ship and build. We need engineers.

Oh, we actually do PRDs sometimes. I think for features that are particularly ambiguous, it does help to write out just a one-pager on what the goals are, what the delightful use cases are, and what the failure modes currently are that we need to fix. And there are occasionally some projects, especially things that require heavy infrastructure, that do take many months. For those situations, we do still write PRDs.

I want to drill a little bit further into just how you're able to move so fast. I've never seen anything like the pace folks at Anthropic are shipping at. Someone made this calendar of launches across Anthropic, and it was literally every day there was a major feature or product. So, one question people had online: you guys just, uh, not launched, but built this incredible model, Mythos, that is still in preview because it's so powerful people are a little afraid of what it can do. Have you been using this? Is this part of the reason you've been able to move so fast?

We've been moving pretty fast for several quarters now, so I think it's not fully Mythos. Mythos is an incredibly powerful model, and we do use the models internally, and I think this has increased our rate of shipping a little bit, but I don't think it explains the bulk of the increase. I think a lot of it is the process and the expectation on the team. We're very low on process. We want to remove every single barrier to shipping things. We want to make sure every single person on the team feels empowered to take their idea from just an idea to out in the world in less than a week, sometimes even in a day.

Cool. Oh man, what an advantage to have the best model and also be building product. That's so cool.

We are very lucky to be able to work with the frontier models.

Oh my god, what an awesome advantage. Just build a thing, then use it, then accelerate faster.

It's so interesting. There are a couple of these other side things I want to go on, these side quests in this conversation. There's so much happening with Anthropic, and I'm so curious to get your insight. One is, a week ago or so, the whole source code of Claude Code leaked. Somebody got it out there. I think it was a mistake someone made. Is there anything you can comment on there? Just, what happened? What went wrong? What should people know?

So we immediately looked into this when we saw it. We realized that this was the result of human error. There was a human working with Claude to write a PR. This was just an update to how we release our packages, and it actually went through two layers of human review. So this was a result of human error, and we've hardened our processes to make sure that it doesn't happen in the future.

Is this person still at Anthropic? Are they doing alright?

Yes. Yes. It's a process failure, and the most important thing is to just learn from it and to add more safeguards so that it doesn't happen again. That's what we've been focused on, and most of those have shipped.

Okay. Another question I had is OpenClaw. So recently there's been this move to keep people from using Claude subscriptions with their OpenClaw. People got really upset. They're confused why this is happening. It feels like there's, you know, harm caused to the open-source community. What do people need to understand about what went into this decision?

So, we've been seeing a lot of demand for Claude, and we've been working very hard both to scale our infrastructure and to make our harness more token-efficient so that you can get more usage out of it. It wasn't designed for third-party products, which have different usage patterns than our first-party ones. We spent a bunch of time trying to figure out what is the most seamless transition that we can offer. And so I was very happy to be able to say that everyone gets some credits alongside their subscription. But yeah, we did have to make the hard decision that we needed to prioritize our first-party products and our API. So this is a decision that resulted from that.

Yeah, to me this makes so much sense. You guys are subsidizing this usage at, like, 200 bucks a month, and it's basically unlimited use, and I think people don't understand businesses are trying to make money. We're trying to be profitable here. We can't just give away compute when it's so in demand. So I get it. Coming back to the PM team: what does the PM team look like at Anthropic? How many PMs are there? How are they organized?

Yeah, so we have a few PM teams. I think we're maybe around 30 or 40 PMs right now. We have the research PM team, which Diane leads. This team is responsible for understanding all of the feedback from our customers on our models, feeding that to the research team to act on it, and they also shepherd the model launches. There's the Claude Developer Platform team, which maintains the APIs that Claude Code is built on top of; they also release things like managed agents, which is a way for you to build your agents and we can host them on your behalf. Then there's Claude Code, which works on both the Claude Code and Cowork core products. There's enterprise, which helps make Claude Code and Cowork easier to adopt for all of our enterprise customers: everything from cost controls, RBAC, and security controls to just making sure that these enterprises feel very confident and comfortable using our tools. And then we also have our growth team, which is responsible for growth across our entire product suite. We work very closely with them on Claude Code and Cowork growth, and I know they also work with our other teams on CDP growth, that is, growth of people who use the Claude API.

So speaking of

growth: so, Amol was just on the podcast. He had this really interesting insight that most people haven't been sharing. There's always this sense that we need fewer PMs in the future. Why do we need PMs? Engineers can just ship. His take is that because engineers are moving so fast, PMs and designers are squeezed. There's less time to stay on top of everything that is happening; there's a feature shipping every day. So his take is he needs more PMs, because it's hard to keep up. What's your take there? Do you feel like there will be an increase in hiring of PMs? What do you think is going on with the PM profession long term?

I think all of the roles are merging. PMs are doing some engineering work, engineers are doing PM work, designers are PMing and also landing code. You can either hire a lot more engineers who have great product taste, or you can keep your engineering hiring the same and hire a lot more PMs to help guide some of their work. On our team we're pretty focused on hiring engineers with great product taste. This way we can reduce the amount of overhead for shipping any product. There are many engineers on our team who are fully able to go end to end, from seeing user feedback on Twitter through to shipping a product at the end of the week, with almost no product involvement. And this, I think, is actually the most efficient way to ship something. So I think engineer and PM are kind of overlapping, and you will get a lot of benefit from having more of either. I think product taste is still a very rare skill to have, and we'll pretty much hire anyone who we feel has demonstrated it strongly.

And your background was in engineering, right?

Yeah, I was an engineer for many years. I was then a VC very briefly before joining Anthropic. Actually, almost all the PMs on our team have either been engineers or have shipped code here on Claude Code, and that's one of the things that I think helps build trust with the team and also just enables us to move a lot faster. And then, actually, our designers have also been front-end engineers before.

Wow. Because that's the big question: there's definitely this merging that's happening, the Venn diagrams are combining. I think the big question for a lot of people is, if you're coming from engineering or product or design, which of those core skills is going to be most valuable? I can see that at Anthropic and on Claude Code, engineering is very valuable. I'm curious if at other companies, if you have a design background, becoming a PM is more valuable, or just being a PM.

I still think it comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write. What is the right UX for this feature? What is the most delightful way that a user can experience it? We get tens of thousands of GitHub issues asking for every single thing under the sun, and it takes a lot of care and taste to figure out, okay, which of these is worth building, and what is the right way to build it. I think that skill set can come from any background, but I think that's the most important thing. I think the reason why an engineering background is particularly useful, at least for the next few months, is that if you have an engineering background, you have a better sense for how hard something should be. And that's often a factor in what you choose to build. If something is very easy to build, then maybe instead of debating it, you just spend an hour doing it. But if something is harder to build, you know that upfront: okay, this will cost a lot more for our team to get out the door. So it helps a bit with the prioritization.

You said, uh, for the next few months. Is that just because the models will get so good, potentially, in the next few months, that you may not even need to know that as much?

I think the valued skill set does change quite frequently, and so it's really hard to predict more than a few months out. So it's less a commentary on what shift I think will happen, and more a commentary that I think large shifts will happen.

So you're not saying that's when Mythos comes out and will change everything, and that we won't need to know anything about engineering.

No, I'm just saying that every few months it seems like there's a large increase in coding capability, which then changes what other roles are valuable. I think the most important thing is to have this first-principles thinking, where you can figure out how the tech landscape is changing and what the team really needs from you, and to jump in and fill that hole. Because I think the work is becoming more amorphous, which means that a great PM is able to understand what all the gaps are, to figure out which are the highest priority, and then to figure out, okay, how do I learn that skill set, or what is the skill set that I have that I can apply to this challenge? So I think the current environment values people who are able to wear a lot of hats, are able to swap them, and are very low-ego about what work they do to help the team move faster.

I love this answer. There's this question I've been asking people in your shoes, folks who are at the bleeding edge of what AI is capable of and building with the latest tools, which is: where will human brains continue to be useful and necessary for a while, until we get to superintelligence? What I'm hearing here is essentially picking the things to work on, knowing where the market's going, and figuring out what to prioritize. And then it's knowing if the thing you've built is good and right, and getting it out there in some early version at least. Does that sound right? Is there anything else of just where human brains will continue to be useful for at least the next few months?

I think humans still provide a level of common sense that the models don't. There are, like, a thousand moving pieces to any product launch. Some of them are very small, but there's always a lot that could potentially go wrong. I think the model doesn't always have a great sense of who all the stakeholders are, how they relate to each other, what their preferences are, and what the right venues are to communicate with them to keep them on board. I think a lot of this more tacit, common-sense, EQ kind of knowledge is still very valuable. Of course, we want the models to get better at this, and I think they will be, but right now I think there are still gaps.

How do you just kind of deal, as a human, going through so much constant change, just being on the inside of the tornado? Maybe it's calm there, but how do you stay on top of what's going on? How do you stay sane through all this craziness that we're moving through?

I think our team is full of people who lean into the chaos. So, we try to face every challenge with a smile, because there's always so much going on. There are always so many risks and tricky situations that, you know, if you get too stressed about anything, you'll burn out. And so we really look for people who can look at a challenge and be like: that's going to be hard, but I'm excited to tackle it, and I'm going to do the best that I possibly can, and I know I won't be perfect, but I'll be able to sleep at night knowing that I did my best.

That's an interesting answer to just what skills will be important in this future, because, I forget who said this, maybe Ben Mann: this is the most normal the world will ever be.

Yeah, it definitely gets harder. I feel like there are a lot of weeks where maybe Sunday night there's some P0, and then by Monday there's, like, a P00, and by Monday afternoon there's a P000, and you're like, wow, I can't believe I was so worried about that P0 from Sunday.

But I think you just have to acknowledge that there's only so much that you can do, that you need to sleep well so that you can make good decisions the next day, and to just brutally prioritize where you spend your time. What's the most important thing to get right? And be okay letting things go. There are products that we ship that aren't as polished as I wish they were. But you know, our top goal is to help empower professional developers. And if a product isn't successful, as long as it's not blocking the core use case, it's okay, because we'll hear the feedback and we'll fix it in the next release. Launching a feature that is buggy is the kind of thing that would have kept me up at night. But it is something that I am now able to live with, knowing that, okay, we're going to get that quick feedback and we're going to fix it in the next release.

What I'm imagining is that GIF, I think it's maybe from Pirates of the Caribbean, where there's this guy walking down a set of stairs on a ship, and the whole ship is just being demolished around him, and he's so chill, just strolling down the staircases while everything's falling apart. And that's interesting, because everyone I've met from Anthropic is just so chill and so optimistic.

Yeah, I think that's a really interesting insight: just having this calmness and optimism versus, oh my god, everything's crazy and going nuts.

Yeah, I think if you don't have it, you'll get pretty burnt out. I think we also tend to hire people who have been in the industry for a while, have experienced lots of ups and downs, and have a good sense for what gives them energy and how to maintain their energy over time. I think that's helped us a lot.

So interesting. Something that I wanted to ask about is: so there are these roles blurring. Engineers are becoming PMs, everyone's dogs are cats, everyone's everyone. What do we lose in that world? Do we lose career ladders and clear career paths? Do we lose design consistency, code quality? You know, there are probably some downsides. What are some things you find are just, okay, that's something we're sacrificing for the greater good?

We're sacrificing product consistency. Historically, when code was expensive to write, you would carefully plan out everything in your product suite: how every product relates to the others, what the use case for every single one is, how they integrate. And you would pretty much have one product for each use case. Now, with AI moving so quickly and with so many ideas that we need to test out, we do sometimes have features that overlap with each other. A lot of the time it's because there are two form factors that we love internally, and we want the external audience to tell us which one is better. What that means for someone who's a new user, though, is that a new user might not know, okay, what is the best path to accomplish X. There is more education we need to do to help people understand what the core features are and what the best practices are for using them. I think this is the cost of launching a lot of features. I think users also feel like it's hard to keep up with the latest.

Usually in traditional PM, you ship a feature every month or quarter. And so it's really easy for a user to understand: okay, I just need to check in on this once a month and I'll learn some new things, and if I ignore it for six months it's fine, I don't feel like I'm missing out. I think with these agentic tools, not just Claude Code and Cowork, but across the whole ecosystem, people feel this need to check Twitter every single day to see what the absolute latest thing is. And I think there's more we can do to help people feel less like they're on this ever-increasingly-fast treadmill. I would love people to feel like they can just open these tools, and the tools will educate them, or teach them what they want to know, and that they can just feel brought along.

Yeah, I saw you launch this really interesting feature the other day. I think it's /powerup, where it basically walks you through all the cool ways and basically all the best practices to use Claude Code. Is that kind of along these lines?

Yeah, exactly. So in the past, we didn't actually want to do something like PowerUp, because we felt like the product should be intuitive enough that you don't actually need to go through any tutorial. And over time we've realized that there are just so many features, and so much demand for a built-in onboarding experience, that we diverged a bit from our original principle of no onboarding flow and added this. There are just so many users who wanted to know: there are 100 features, what are the 10 that I absolutely need to use? And so we put that together.

Yeah, it's such a bizarre world. Anthropic has been really successful with B2B enterprises, where traditionally you don't launch a bunch of stuff; you just have maybe a quarterly release. It's the opposite of "every day we've got something new."

So maybe following that thread: the run Anthropic has been on is just otherworldly. Anthropic was way behind when it started. By most accounts it was one of the least funded companies. It didn't have distribution. It wasn't first to market; OpenAI was way ahead. It seemed like there was no way Anthropic had any chance to compete significantly long term. Now it's just killing it, beating the biggest companies' teams, with growth that's just, like, $11 billion in ARR, and by the time this comes out it'll probably be even higher. Being on the inside, what are some ingredients that have allowed Anthropic to be this successful and come from behind like this?

The two most important things are, one, this unifying mission. It's hard to state how important this is. We hire people who care most about bringing safe AGI to all of humanity, and this is actually something that we reference frequently in our decisions about what our entire product org should focus on shipping. Because we put this mission above any individual product line, we're able to make very fast decisions that cut across the entire org and execute on them in a unified way. This is something I've never seen at a company of our scale.

And just to make sure that's clear: essentially, the number one mission is safety and alignment, making sure AI is good for the world. And you're saying just having that as a clear mission makes decisions a lot easier to make.

If there are two competing priorities, we'll talk about which one is more important for Anthropic's mission, and that makes it a lot easier to decide which of the two we prioritize. And then everyone will stand behind the one we decide. Sometimes that means, hey, we want to ship something on Claude Code, but this other thing is more important, so we deprioritize shipping it and just wait until later.

What's really interesting about that is it explains, I think, versus another company, maybe rhymes with Bopen BI, that did a lot of different things. What I'm hearing here is essentially: okay, we're not going to launch a social network, we're not going to launch a feed of interesting information, because it's not aligned to this mission. And that has kept Anthropic focused, which seems to be a core ingredient of the success.

Well, when I think about mission, I think about putting Anthropic's goals ahead of any individual or any individual product. And so for me, I think the second thing that we're very good at is focus. Mission, to me, is slightly different. Mission means that teams are willing to make sacrifices that hurt their own goals and their own KRs in service of Anthropic's goals and Anthropic's KRs. And people are very happy to make those trade-offs. An extreme example: if Claude Code failed but Anthropic succeeded, I would be extremely happy, and the whole team is very willing to make decisions that follow that chain of thought.

I don't know if you can talk about this in depth, but do you feel like the OpenClaw decision is part of this? Just like, okay, this is not furthering the mission of Anthropic, we need to stop this because it's not working in the way we want it to work.

I think one of the most important things for Anthropic is to grow the number of users that we're able to reach. One of the ways we're able to do this is with the Claude subscriptions, with our first-party products, and so we very much want to double down on that. But that does come at the expense of third-party products sometimes.

So we've been talking about Claude Code, Cowork, all these things. Something I want to make sure people get, and I'm curious how you use these tools yourself: there's Claude Code, there's Claude Desktop, there's Cowork. What's the best way to understand when to use which? When do you use each of these three?

So, I tend to use Claude Code in the terminal when I'm kicking off a one-off coding task and I want all of the latest features. The CLI is our initial product surface, and it's also the one where our features often land first, so it's the most powerful of all the tools. That's what I tend to use when I'm trying to kick off one, or maybe a handful, of tasks at a time.

I think Desktop really shines when you're doing something that requires front-end work. One thing that I love to do is use our preview feature. If I'm building a web app, I'll often use Claude Code in Desktop with the preview pane open on the right-hand side, so that I can actually see the web app I'm making in real time as I'm chatting with Claude. It's also really great for people who want something a bit more graphical. A terminal can feel very unfamiliar to someone who's non-technical. You get a bunch of these scary popups on your machine, and you can't click around the way you're used to in pretty much every other product you use. So there are a lot of people who just don't feel comfortable in a terminal. If that's you, I would highly recommend checking out Claude Code on Desktop.

Desktop is also great for getting an at-a-glance view of everything that's happening. You can see your CLI terminal sessions in Desktop. You can see your other Desktop sessions. You can see the sessions you kicked off on web and mobile. So it's a one-stop control plane where you can see all of your tasks.

I think the benefit of web and mobile is that they're really great for kicking things off on the go. The CLI and Desktop both require you to be on your laptop, and that's constraining, because sometimes you're out and about, you're touching grass, you're going on a walk, and you don't have your laptop open. I can't count the number of people I've seen holding their laptop open, tethered to their phone, while they're outside. That just means we're missing a product that solves that need. For me, what mobile lets you do is kick off these tasks on the go, so that you don't need to bring your laptop everywhere and make sure it's open wherever you are.

I love that. I've seen people on planes; it's just such a meme now. "I need to let this agent finish. I can't shut this down. I need Wi-Fi."

And then I think for Cowork, the role that it fills is that there's a lot of work everyone does where the output isn't code. Whether that's getting to Slack zero or inbox zero, or creating a slide deck for an upcoming customer meeting, or writing a quick doc on what the goals of a feature are or what the launch plan for a feature is. All these tasks produce outputs that are non-code, and Cowork is best positioned for that. So the way I split the products in my mind is: if I'm building something where the output is code, I'll use Claude Code in the terminal, on Desktop, or on mobile. And if the output is anything that's not code, I'll use Cowork for it.

People are sleeping on the success of Cowork. It's growing incredibly fast, and I think people still don't understand what it's for. So could you give us a couple of use cases from your work as a PM? What are some really interesting, maybe unexpected, ways you use Cowork to save time and get more work done?

If you're getting started on Cowork, the first thing you really need to do is connect all the data sources that are relevant to your role, because Cowork can only do a great job if it has access to all the context it needs to curate the output for you. What that means for me is I connect it to my Google Calendar, my Slack, my Gmail, and my Google Drive, so that it has the flexibility to find relevant context, ask questions, and pull in threads. This substantially improves the quality of the result.

The kinds of things I use it for: last night, for example. We have this Code with Claude conference coming up, and there are a few talks I'm giving there. One of the talks is about the transition of Claude Code from an assistant to a full-on agent, and one of the things I wanted to do in this talk was showcase all of the products we've been shipping that enable this transition, and also figure out what success stories people have had internally that we can use as demos. So I have my Google Drive connected, I have Slack connected. Alex, who's our product marketer, put together a draft of the points he thinks we should cover. I fed this all into Cowork and told it the narrative I want to tell. And it just worked for an hour. It went through Twitter to see what we launched. It looked through our evergreen launch room. It looked in our Claude Code announce channel, which is where our team posts demos of how they've been getting the most value out of Claude Code. And it synthesized all of this into a 20-page deck that I woke up to this morning. I read through it and it was pretty good. There were a few tweaks, so I did have to give it a round of feedback. I like my slides to have extremely minimal words, and it was a little too wordy. But it was far faster than what I would be able to produce. And because Cowork has access to our whole design system, it actually looks like an Anthropic designer put it together. When you visually see it, you're like, "Oh, this is incredibly polished." These are the kinds of things that are so much faster. Making this slide deck would have taken me hours, but instead it turns out a draft that's actually quite good, so I could focus on making sure the demos we plug into it are amazing.

This sounds like a dream come true for PMs; putting decks together is so annoying.

It's so slow.

And I love that people will see this deck whenever you present it. This will be out in the world. Obviously it's not the one-shotted version; you've iterated on it. So just to help people try this for themselves: step one is connect their... what did you say? Slack. What else do you suggest they connect?

Slack, Google Calendar, Gmail, Google Drive. You should connect your communication tools and wherever you store your source-of-truth data for what your team cares about, what you care about, and what you're working on.

Okay. And then what was the prompt, roughly, that you put in there to generate this deck?

So I just wrote: make me a slide deck for the Code with Claude conference. This is what our PMM suggested it should cover. This is the current draft that I made manually that I don't like, but I linked it. Can you start by creating a proposed outline with details? Also, make sure it doesn't overlap too much with the keynote talk, which is more important. And then Claude read a bunch of the links I sent it and created a proposed outline. I read through its proposal and all the different ideas it had generated for what we could cover, and I just made a decision on what I actually wanted to be in the final deck. I think this is an example of what the role of the PM still is today. Claude is a great brainstorming partner. It's able to synthesize a massive amount of information really quickly and present all of the possibilities to you. But the role of the PM is still to make the end decision on what should belong in the final product. For this, what I ended up deciding was that I wanted the talk to cover the progression from making local tasks successful, to making every PR green, to helping engineers land more PRs, and for each of these, which demo would be the most compelling. Then, after this decision about the outline, Cowork just went off for a few hours and built the whole slide deck.

This is so awesome. What an awesome part of the job to not have to do anymore. It feels like you're talking to, essentially, a deck designer that also has actual knowledge about what you've worked on and can make the content what you want it to be, not just make it look really nice. How did you do the design system piece? How does it know the design system of Anthropic?

So what I did for this: we actually already have a standardized deck that we use across all of our external engagements, and I just gave Claude access to that. So it's able to see what colors we use, what fonts we use, the different kinds of, what's it called, slide formats that are possible. It has like 20 of these example slides.

Got it. So you upload: here's our template, work from this.

Yeah. You can also connect to your Figma MCP if you have your slide formats saved there, and it can pull that in.

Along those lines, something I'm always curious about is what's in your stack of tools as a PM at Anthropic. Obviously Claude Code and Cowork and all the Anthropic tools. What else are you using? Slack, you mentioned. Is there anything else?

So my stack is pretty heavily Claude Code and Cowork. Anthropic largely runs on Slack; I feel like it's the core OS of our company. Day-to-day, I would say maybe 30% of my time is pushing the boundaries of what Cowork can do, so that I have a very strong sense of what we're not good at. And I spend a lot of time talking with the model to understand why it makes the mistakes that it does. We also have a lot of internal tools that we make. I think one of the things Claude Code has really unlocked for our entire company is that it lowers the barrier to making any custom app you want. So we've seen this surge in personalized work software that people are building for custom use cases, instead of using tools that don't perfectly fit the use case.

I've got to hear more. What are some examples? What are things you've built, or other people have built, that are really popular and useful?

One of the sales folks on Claude Code realized he was making these repetitive decks over and over and over again. So he built a web app with examples of the core Claude Code decks that we know work well, like a 101, a 201, and Mastering Claude Code. And then he has a way to input specific customer context that pulls from Salesforce, from Gong, from other notes, so that we can customize the decks for specific customers. It'll pull out things like: okay, this customer is using Bedrock, or Claude for Enterprise, or Console, which affects what features are available to them. It'll pull out things like: this customer is concerned about the code review stage of the SDLC, so we'll add a slide about our code review features. It'll pull out things like: this customer needs to be HIPAA compliant, or needs XYZ security controls, so we'll make sure to add a slide or two in their deck about that. And then, for example, if this is a customer that's on Vertex or Bedrock and doesn't want to use Claude for Enterprise, we'll just take out the slides about enterprise-only features. Normally this is manual work that could take 20 or 30 minutes, so people either spend that time doing it or they just decide not to and use the general deck. With this, it takes a few seconds and you get a tailored deck.
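The tailoring pass described above can be sketched as a small rules function over customer context. This is a hypothetical sketch, not the internal tool: the field names (`platform`, `concerns`, `needs_hipaa`) and slide titles are invented, and the real app pulls this context from Salesforce and Gong rather than from a dict.

```python
# Hypothetical sketch of the deck-tailoring rules described above.
# Field names and slide titles are made up for illustration.

BASE_DECK = ["Intro", "Claude Code 101", "Core Workflows", "Enterprise Admin Controls"]

def tailor_deck(base_deck: list[str], customer: dict) -> list[str]:
    """Return a customized slide list for one customer."""
    deck = list(base_deck)

    # The customer's platform affects which features are available to them.
    if customer.get("platform") in ("bedrock", "vertex"):
        # Drop slides that only apply to Claude for Enterprise.
        deck = [s for s in deck if "Enterprise" not in s]

    # Stated pain points pull in targeted slides.
    if "code_review" in customer.get("concerns", []):
        deck.append("Code Review Features")
    if customer.get("needs_hipaa"):
        deck.append("Security & HIPAA Compliance")

    return deck

customer = {"platform": "bedrock", "concerns": ["code_review"], "needs_hipaa": True}
print(tailor_deck(BASE_DECK, customer))
```

The point of the design is that each rule encodes a decision a salesperson would otherwise make by hand, which is why the 20 to 30 minutes of manual assembly collapses to seconds.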

What's interesting is that Slack is the tool nobody's trying to recreate. Slack just continues to win, and the way you describe it, it's kind of the OS of so many companies. It's so interesting. People talk about Salesforce as just SaaS, like we don't need SaaS software anymore, we're going to build our own. But Slack is a durable tool that nobody wants to try to compete with and build a better version of.

I think it's pretty important communications infrastructure, and I think they do the core task, helping everyone get real-time updates, incredibly well.

Yeah. People hate on Slack, but it's really great at what it's trying to do, and the most cutting-edge teams are hooked on it. So interesting.

Yeah. And I also love how easy they've made it to customize. We love making Slack bots, and this kind of hackability means that we're able to integrate with Slack the way we want to. So I really appreciate Slack's work on that.

Time to buy some CRM stock.

I am so excited to tell you about this season's supporting sponsor, Vanta. Vanta helps over 15,000 companies like Cursor, Ramp, Duolingo, Snowflake, and Atlassian earn and prove trust with their customers. Teams are building and shipping products faster than ever thanks to AI, but as a result, the amount of risk being introduced into your product and your business is higher than it's ever been. Every security leader I talk to is feeling the increasing weight of protecting their organization, their business, and not to mention their customer data. Because things are moving so fast, they are constantly reacting, having to guess at priorities, and having to make do with outdated solutions. Vanta automates compliance and risk management with over 35 security and privacy frameworks, including SOC 2, ISO 27001, and HIPAA. This helps companies get compliant fast and stay compliant, more than ever before. Trust has the power to make or break your business. Learn more at vanta.com/lenny. And as a listener of this podcast, you get $1,000 off Vanta. That's vanta.com/lenny.

Okay. So you talked about all these different teams and how they use Claude Code and Cowork to operate. Which teams do you find, other than engineering? I imagine engineering is the biggest token spender, but if not, that'd be really interesting. What's the second-place function for tokens right now?

Oh, Applied AI is amazing at pushing the boundaries of what Claude Code and Cowork can do. A lot of our Applied AI team spends time with our customers, helping them adopt our API. So sometimes our applied team will, for example, make prototypes on behalf of these customers, which Claude Code makes so much faster than it used to be. They also have the dual goal of needing to manage a lot of customer comms, a lot of customer inbound and historical context and call notes. So they're extremely heavy on both Cowork and Claude Code.

And just to understand Applied AI: is that like a forward-deployed engineering sort of role? How would most people describe what the Applied AI team is doing?

Yeah, it's helping our customers adopt the latest API and model features across their company, both for powering their company's products and for internal acceleration.

Got it. So it's like customer success, go-to-market-y, kind of like forward-deployed engineering, sort of.

Exactly. It's a very technical go-to-market person.

Got it. Okay, awesome. So you're saying that might be the second org that uses the most tokens.

Yeah. And we also see them pushing the boundaries of what Cowork can do. For example, a lot of these folks cover multiple customers and, on a high day, can have five to ten customer engagements. So what they often do is, the night before, they'll ask Cowork to summarize: okay, what are all my customer meetings coming up tomorrow? What are all the things this customer has asked me for? What's top of mind for them? What are the action items from past meetings? And Cowork will put together this dossier, this brief, of what they should be aware of going into the next meeting. Cowork can also research answers. If a customer asked, "When is feature X going to launch?", Cowork can help the applied AI person research through Slack to get the latest ETA and add that to the notes, so that during the customer call they have the absolute latest. These are just workflows that people are building for themselves and sharing with others on their team.

So cool. Something that comes up a lot recently is this trend of token spend exceeding people's salary, where people use so much AI that it costs more than they're making. Are there any numbers floating around Anthropic on how much engineers, or PMs, spend on tokens per day or per month, anything like that?

It is clear to us that as the models get better, people delegate far more tasks to them, and they spend a lot more hours in tools like Claude Code and Cowork. So we do see the token cost per engineer, or per any knowledge worker, increase every time there's a model jump or a substantial product improvement. I think it's still much lower than the average engineer salary, but we see the percentage increasing over time.

It's such an interesting... we talked about how you have access to the most cutting-edge models as another advantage of working at Anthropic. I believe you basically have unlimited tokens; you can use as much as you want. Is that right?

We can use a lot of tokens. Some people do run into limits.

Okay, there's a limit. Okay, Boris, shut it down. It's so interesting how many advantages come from having the most advanced model. Such an interesting flywheel starts to kick in.

I think we also believe a lot in empowering our internal teams to build as fast as possible. And we trust that everyone understands how much capacity serving these models truly costs, and we trust our team to use the tokens responsibly. So it's very frowned upon to waste tokens, but we do trust individuals to make that judgment call.

Awesome. Coming back to the PM role: we talked a little bit about this, but I think this will be really interesting for people to hear. What do you think are the emerging skills PMs need to develop, or that AI companies most look for when hiring PMs these days?

I think the hardest skill is being able to define what the product should look like a month from now. There's a lot of ambiguity in what models will be capable of on that timeline and in how user behavior will change. But I think there are patterns that the best PMs can see, based on how users are abusing the limits of the existing product. The best PMs can sense that, can set a direction, can steadily execute towards it, and can change the path if the model capabilities turn out much better or worse than they had originally expected.

I think it is very hard to be the right amount of AGI-pilled. Everyone can see this future where the models are extremely smart and can do almost everything, in which case you actually don't need that complicated a product. You can just have a text box again, where you tell the model what you want, and it's so smart that it can add any tool or integration it needs to get the job done. It knows when it's uncertain and can ask clarifying questions. It's kind of easy to build the product for the super-strong AGI model. The hard thing is figuring out, for the current model, how do you elicit the maximum capability? How do you help users get onto the golden path? How do you guide users to interact with the model's strengths and patch its weaknesses? This skill is pretty rare.

And how do you build that skill? Is it just using each model, basically understanding its limits? You talked about taste: having taste for what the model is capable of, what it's great and not great at, where it's changed.

I think it's spending a ton of time talking to and using the model. One of the things I really like to do is ask the model to introspect on its own behavior. Sometimes I notice the model does something unexpected; for example, there are situations where the model will make a front-end change and run tests but not actually use the UI. It's actually pretty useful to ask the model to reflect on why it did this. Sometimes it'll say, hey, there was something confusing in the system prompt, or, I didn't realize that front-end verification was part of this task, or, I delegated the verification to this sub-agent and the sub-agent didn't do the test and I didn't check its work. A lot of the time, just being very curious about why the model made the decision it did will show you what misled it, so you can fix the harness to close this gap.
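The introspection loop described here amounts to appending a reflection turn to the same conversation, so the model can explain itself with the full context of what it just did. Below is a minimal sketch assuming a Messages-API-style list of role/content dicts; `make_reflection_turn` and the sample history are invented for illustration, and actually sending the extended conversation to a model is left out.

```python
# Sketch of the "ask the model to introspect" pattern: after an unexpected
# action, extend the same conversation with a reflection request so the
# model explains its reasoning with full context. The dict shape mirrors a
# chat-messages API; no real API call is made here.

def make_reflection_turn(history: list[dict], observation: str) -> list[dict]:
    """Return the conversation extended with a reflection request."""
    question = (
        f"I noticed that {observation}. "
        "Before doing anything else, explain why you made that choice: "
        "was anything in the system prompt or task description confusing?"
    )
    return history + [{"role": "user", "content": question}]

history = [
    {"role": "user", "content": "Change the header color and verify it."},
    {"role": "assistant", "content": "Done. I updated the CSS and ran the unit tests."},
]
turns = make_reflection_turn(history, "you ran tests but never opened the UI")
```

The key design choice is reusing the original history rather than starting a fresh chat: the model can only diagnose what misled it if it can still see the prompt and actions that did the misleading.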

The other thing that helps is to figure out who the users are that you trust the most to give you accurate feedback about the model. Usually there's a handful of people who are much better than others at articulating what makes a specific model, or model-harness combination, good. A lot of people will give you feedback, but not everyone's feedback is as qualified. So finding a group of, say, five people you trust is really important for getting very fast feedback.

The third thing that is useful, though not everyone loves doing it, is building evals. You don't need to build hundreds of evals for them to be useful. Just building 10 great evals is important for helping the team quantify what the goal is, what their progress towards it is, and what they're missing. I think evals are this underappreciated thing that more PMs and more engineers should be working on.

We've covered evals a bunch. There's this trend of saying that the future of product management is writing evals, because essentially it's: what does success look like? Okay, cool, let me actually concretely define it, and then we'll know. How much of your time would you say you spend writing evals?

I think the importance of evals varies a bit based on the feature you're working on, or what problem you're trying to solve. There are a lot of folks on our team who do spend a lot of time working on evals. We have a small pod of folks who collaborate very closely with research to more precisely understand Claude Code's behaviors and what the largest areas of improvement are, and to measure those pretty concretely. I personally jump into evals when there's a feature that I think needs a bit more product definition, and often the output is: okay, here are five evals that I made, this is how you run them, these are the ones that succeed and these are the ones that don't, and this is the prompt I've used to increase the success rate. It varies a lot, though, based on the exact feature. Not every feature needs it, but I think features such as memory benefit a lot from this.

This point you made about people being very good at evaluating models is so interesting. It's almost like a human eval: they understand where it's spiking or where it's maybe lacking. Is there anyone specific you want to shout out who's very good at this?

Two people who I think are incredible at this. One is Amanda, who molds Claude's character. It's such a hard role because the task is so ambiguous. Even coding is easier, because you can verify the success, whereas crafting the character requires a very strong sense of conviction in who Claude should be. I think she has an incredible ability to not only mold the character, but also articulate what the goals are, what the character is, what's successful and what's not.

The other group of people I really trust is the Claude Code team. We often have team lunches, and whenever there's a new model we're testing, one of the fastest ways for us to get feedback is to go around to every single person at these lunches and ask, "Hey, what's your vibe on the model?" Oftentimes we'll get feedback like, "This model is not fully explaining its thinking, it's too abrupt." Or, "This model loves writing a ton of memories, but we're not sure if the memories are high quality." Or some people will notice, "This model loves to test itself, which is great," or, "This model isn't testing itself enough." That informs what data we look at to verify whether it's a larger pattern. We have a ton of data, but it's very hard to extract insights, and the feedback from this group helps us form the hypotheses we want to test; then we're able to pull data to test them.

This point you made about the character of Claude: I had Ben Mann, co-founder, on the podcast, and he talked about how the character, the constitution of Claude, is such an important part of Claude. I didn't realize until afterwards. With OpenClaw, actually, one of the reasons people are sad is the personality of their Claw, because Claude's personality is so good and fun and interesting, unlike other models. The way he put it is that the personality is what makes Claude so good at so many things. It feels like this trivial side thing, okay, it's going to be funny and interesting and talk in a fun way, but it's so core to the success of Claude. Is there anything you can add about what people may not understand about why the character and the personality, as you described, are so key?

When you reflect on everyone you've worked with, there are some people where you're like, I really like their energy, I really like their vibe. And when people think about Claude and Claude Code, this is one of the things they bring up the most: they really love that Claude is lighthearted and fun, but also extremely competent at your task. People really like that Claude's low ego. If you tell it, "Hey, you did this thing wrong," it's truly sorry. It's like, "Oh shoot, thanks for telling me. Let me fix it. Let's work together." It's also very positive. So if you're feeling like, oh, this is an insurmountable task, I don't know how to get started, Claude is like, "Okay, it's okay. These are the steps I think we should take. Do you want me to get started on it for you?" I think part of what makes a great coworker is this positivity, this bias towards action, this ability to give you earnest feedback rather than just agreeing with every single thing you say. And so we try to imbue this into Claude, because we think it makes it a lot more enjoyable to work with.

There's something I want to come back to. You talked about how, when new models come out, you often have to revisit things you've built. That's so interesting, and maybe so frustrating: oh, god damn it, we shipped this thing and now we have to rethink it. Talk about how often a new model comes out and you're like, okay, we have to redo this product that we launched a few months ago.

A lot of the changes we make with a new model involve removing features that are no longer needed. A lot of times we add features to the product as a crutch for the model, because it's not naturally doing something itself. The classic example of this is the to-do list. When we first launched Claude Code, people would ask it to do these large refactors, and Claude Code would say, "Okay, cool. I need to change these 20 call sites," and it would go change five of them and then stop. And we were like, okay, how do we force it to remember to get every single one of these 20? So Sid on our team said, okay, what if we think about what a human would do? A human would make a list of everything they need to change, similar to how in VS Code you would look up all the call sites, get a list on the left side, and go through them one by one and replace all. How do we give that kind of tool to Claude? So he added a to-do list, and we found that with it, Claude was actually able to fix all 20 call sites. But with Opus 4 and later models, we realized we didn't need to force it to use the to-do list; it would naturally use it itself. For the earlier models, we had to keep reminding it: hey, did you finish everything on the to-do list? You can't finish until you're done with everything on the to-do list. For the later models, without prompting, it just naturally thinks to do everything on the to-do list. These days, the to-do list is still nice to have as a user, because you can more clearly see what Claude is working on. But honestly, it's such a deemphasized part of the product now that the model may use it or not; it's really not necessary for it to make thorough changes anymore.

I forget who said this on the podcast: the model will eat your harness for breakfast. What I'm hearing here is essentially that you remove, over time, things you had to add on top of the model when it wasn't operating the way you wanted. And as the models get smarter, it becomes simpler and simpler to just get it to do the thing you want.

Yeah. We can remove a lot of prompting interventions every time the model gets smarter, and we actually do this every time we launch a model. We read through the entire system prompt and reflect on, okay, for each of these sections, does the model really need this reminder anymore? If not, we remove it. The most exciting thing that new models unlock, though, is entirely new features. There are a lot of features we've been testing with prior models where the accuracy wasn't high enough for us to want to launch them. One example of this is code review. We tried to build a code review product a few times, and we launched simpler versions of it, the /code-review command, in the past. It was only with the most recent models that we felt, okay, this code review is so good that our engineering team relies on it passing before we merge PRs. We've always dreamed of Claude being able to be a reliable code reviewer that we can confidently trust to catch the majority of bugs. And it was only with Opus 4.5 and 4.6, and Sonnet 4.6, that we felt we were able to run multiple code review agents simultaneously to traverse the entirety of the codebase and synthesize a set of real issues that an engineer needs to address before merge. So this is a new capability that the newest models have unlocked.
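The fan-out-and-synthesize shape she describes, several review agents each traversing a slice of the codebase in parallel, with their findings merged at the end, could look something like the sketch below. The `review_agent` body is a stub standing in for a real model call; none of this is Anthropic's actual harness:

```python
from concurrent.futures import ThreadPoolExecutor

def review_agent(files: list[str]) -> list[dict]:
    """Stub for one review agent. A real harness would prompt a model
    over this slice of the changes and parse structured findings."""
    return [{"file": f, "issue": "possible unchecked error", "severity": "low"}
            for f in files if f.endswith(".py")]

def parallel_review(all_files: list[str], n_agents: int = 4) -> list[dict]:
    # Shard the codebase so each agent traverses its own slice concurrently.
    shards = [all_files[i::n_agents] for i in range(n_agents)]
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        results = pool.map(review_agent, shards)
    # Synthesis step: flatten and order the findings. A production version
    # would also dedupe overlapping reports and filter false positives
    # before surfacing "real issues" to the engineer.
    findings = [f for shard in results for f in shard]
    return sorted(findings, key=lambda f: f["file"])

findings = parallel_review(["a.py", "b.md", "c.py", "d.py"])
print(len(findings))
```

The interesting engineering is in the synthesis pass, which is exactly the part that only became reliable with the newest models.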

This is another trend that is very common on this podcast: build something that will only become possible in the next six months. Be at the edge of what's sort of working, then the models catch up, it becomes an amazing product, and you're ahead of everyone.

Yeah, exactly. It's pretty important to build products that don't necessarily work yet, so that you know what is missing for the product to work. Then, with the newest model, you can just swap it into the prototype you've already made and see: okay, does this new model close that gap?

How much are you able to speak to where things are going with Claude Code and Cowork, the vision of it? I imagine you don't want to give away too much, but there are all these awesome features being added on top: dispatching Claude from your phone, the mobile app, all these things. What's a way to understand the long-term vision for all of them?

We think about this in terms of building blocks. For both Claude Code and Cowork, the core building block is making individual tasks successful. You want to produce some output, you give it a clear prompt description: is it able to consistently produce acceptable output that you can either merge or share with your colleagues or an external audience? So the task is the core building block. As the models get smarter, the task success rate gets a lot higher, and then we see people moving towards doing multiple tasks at the same time. Multi-Clauding was this big thing towards the end of 2025, and it's only increased since then. We see this as: okay, great, one task works, and now you can do six tasks at a time. As the models get even smarter, the way we're extrapolating this is: next, maybe you're going to run 50 Claudes at a time, or hundreds of Claudes at a time. So what is the infrastructure we need to build to enable that? At that point, you're probably not going to run everything locally on your machine anymore; there's just not enough RAM to do it. So we're thinking about how we make it easier for you to manage all of these. They will probably run remotely. How do we build the interface so that you as a human know which tasks you need to look into? How do we make sure the agent is fully verifying its work, so that when you look at a task and it says it's done, you can very quickly verify and fully trust that it is done to your spec? And how do we make sure this process is self-improving, so that when you do see a task that isn't done to your liking, you can give it feedback and the model will incorporate that feedback on every future run, and never make that mistake again? That's the progression we're bringing our users along for.

There are a lot of people listening, a lot of product managers, founders, and other cross-functional folks, and there's a lot of worry about their roles and the future of their careers. What advice would you have for people, not just to survive this transition to a very AI-driven world, but to really thrive in this future? What do people need to hear, and what do they need to be doing?

I think AI gives everybody a ton more leverage than they used to have. So I would push you: anytime you realize you're doing some manual task multiple times, think about how you can use Claude Code, Cowork, or other AI tools to automate it for you. Most people have creative parts of their job that they absolutely love, and tedious parts that they really hate doing. The beauty of AI is that it can do those tedious parts for you. It can learn from every time you've done that manual task, generalize, and then run it automatically, so that you can focus on the creative parts. That means you can do a lot more than you used to be able to do. So my immediate push for people is: figure out the repetitive parts that you can pass to Claude. Iterate on those automations until the success rate is very high, and then focus on what more you can be doing for your team, your product, your company that people haven't had the bandwidth to pick up so far, or that pet project you always thought the company should do but never had the bandwidth for. If AI can take care of the grunt work, then you have this extra 20% of time that you might not have had before. So my push is to lean into these tools, hand off the work you're not excited to do, figure out how it can accelerate you, and as a result you'll be able to do so much more.

Something core to what you just shared, which I fully agree with, is: find problems to solve with AI. There's all this potential in what these tools can do, and for a lot of people the hardest part is just, what should I actually do? What you're saying is: pay attention to things you're doing constantly that you could automate, and pay attention to ideas that have been floating around that you haven't had time to do. It's basically, solve a problem for yourself. That's the core advice.

Exactly. I would also push listeners towards bringing your automations from "okay, this is a cool concept" to "hey, this actually works 100% of the time." Sometimes I see users trying to automate something, getting it to 90 or 95% accuracy, and then giving up on it. If an automation doesn't work 100% of the time, it's not really an automation. And that last 5 to 10% does take more time. Also, building the automation is often a lot slower than doing the task yourself. I would encourage listeners to put in that time: scope some automation that you really want to get to 100%, put in the elbow grease to teach Claude your preferences and give it feedback, so it can improve its skill and get to that 100%. Then you'll really be able to rely on it. There's just not much value in a 95% automation.

I am super guilty of that. This is really good advice for me.

I am guilty of this too. I've been teaching Cowork to try to get me to inbox zero for Gmail, and it has been very time-consuming, and it is definitely not there, as you probably realized.

Yeah, funny enough, that's exactly where my mind goes. I have this workflow I set up where, for every email I get, it looks for things that are spammy, which is all these "Hey, can I come on your podcast?" or "What about this one?" All these things where I'm just like, I don't have time for this. And I have it categorize them into a folder called Spammy. And it's 95% great, but then there's the, oh wow, I missed an email because it went in there. So this is a good push for me. I'm going to work on this. I'm going to get it to perfect.

Yeah. We're also working on making the flow for customizing these commands a lot easier, because right now I think you have to know too many concepts. You have to know to define a skill. You have to know to use that skill and give it feedback. Then you have to know to tell Cowork to update the skill based on all the feedback you gave. And then you also have to know where to read the skill, to make sure the feedback was incorporated the way you want. It's our job to make this flow really seamless, so that it doesn't feel painful to do.

Amazing. Is there anything else, Cat, that you wanted to share or leave listeners with, anything you wanted to double down on that we haven't touched on, before we get to our very exciting lightning round?

I see a lot of people playing around with AI, building prototype apps and tinkering with workflows. I would really push people towards building apps that you're actually using every single day, because I think only through that usage are you actually getting the value. If you build a prototype app that isn't helping you get more done, then the AI isn't really adding value to your day.

And there's only so much you learn from that, when it's like, okay, I just one-shotted something. Oh, that's cool. And then you never come back to it. You're not learning a lot and you're not getting much leverage from it, actual leverage.

Yeah, that's such a good point.

I also think there are a lot of people who spend a lot of time customizing their workflow. There are two ends of the spectrum. One is people who never customize or never build automations, but there's this polar opposite end of people who obsess over customizing their tooling, adding a ton of skills and MCPs and workflow improvements. I think sometimes that can even distract from your core goal of launching some product or building some feature. There's a lot of fun in customizing, and we definitely want to make our products very hackable so that you can make them work really well for you, but there is a limit to how much of it is useful. And I think there's a camp of people who maybe spend so much time customizing that they're not sleeping and not doing the core task they originally set out to do.

I see a lot of that on Twitter: look at my setup, it's out of control, it's so optimized. Then, what are you actually building? No, but my setup is so awesome, it gets so much done.

I think the simple setups actually work better.

It's like a power-up, you take it to level up a little bit.

Yeah. Yeah.

There's this Karpathy tweet that came out just yesterday where he talked about this interesting divide. There are people who tried ChatGPT or Claude back in the day, thought it was just okay, said, "Nah, this is terrible," and gave up on what AI could do for them, and they're so cynical: no way, it's not actually that big of a deal. And then there are people who are using it to code, essentially, who see the full, intense power of it and how good it is. And people on both sides don't understand the other side and how they see the world. So your advice is really good here: actually use it for real things and see how good it has actually gotten.

Yeah. I think the big shift is that the 2024 generation of products was chat-based, and the Claude Code generation of products is action-based. The big aha moment people have is when Claude can just do things on your behalf. It is an amazing feeling to know that the agent is capable of so much more than telling you what to do; the agent can actually just do it itself. When people feel that, I think that's the eye-opening moment.

Shout out to the Claude for Chrome extension, where you can just watch it doing stuff. You're like, "Fill out this form for me," and it's like, "All right, here I go."

Exactly.

Okay. Uh anything else before we get to our very exciting lightning round?

No, let's do it.

Let's do it. Cat, I've got five questions for you. Welcome to the lightning round. There's this animation that plays; I have to make sure to say it. Are you ready?

I'm ready.

First question, what are two or three books that you find yourself recommending most to other people?

I really like How Asia Works. It's a story about economic development and the policies and governments that make long-lasting, successful economies. The other book I'm really into is The Technology Trap, which is about the past few technology revolutions, the industrial revolution and the computer revolution, and how they affected workers. The reason I really like it is that I think there's a lot we can learn from history to make sure this transition goes well. And maybe on a fun note, I really like The Paper Menagerie. It's a book of short stories about coming of age and AI and self-discovery.

Favorite recent movie or TV show you've really enjoyed?

I really like Drive to Survive. There's no deeper meaning to it; there's just something very satisfying about people being so obsessed with a singular engineering goal, and the purity of their pursuit. And I also really love Free Solo, which is about Alex Honnold climbing El Capitan without a harness. Similarly, it's just such a pure achievement to be able to climb this extremely challenging, dangerous route, and to have the mental focus to do it knowing that if you make a single mistake, you die.

It's insane. Yeah, that movie is out of control. And it's interesting how these relate in some way to the work you do.

I actually am a rock climber. I first watched Free Solo before I climbed rocks, so I thought it was impressive, but I didn't understand how impressive it was. It's one of the rare movies where the more you know about it, the more you're blown away by how insane it is. The kinds of moves he's doing on the wall are things I don't think I will ever be able to do in my lifetime, even if they were set in a gym, one foot off the ground, with a rope.

With a rope.

Did you see the documentary on that other guy, the younger one who went up that ice mountain?

I did. That one was very sad.

But that was wild. Okay, favorite product you recently discovered that you really love?

The product that has most changed my life outside of Claude products is probably Waymo. I'm a diehard Waymo user; I use it twice a day, to get to and from work. The two things I really like about it: one, I don't feel bad if a Waymo is waiting for me, so I feel less pressure to be right at the curbside the moment it arrives. And the second thing is that it lets me be a bit more productive. When I'm in the car with another human, I typically try not to do any work calls; I feel a little rude if I'm on my laptop the whole time. But one thing I really appreciate about the Waymo is that I can call into a work call. I'm not worried about someone overhearing me. I'm not worried about, hey, is this rude? Am I talking too loud? Do I need to ask someone to change the music? I feel like this has given me back 30 minutes every day.

All these second-order effects of technology. It's so interesting.

Yeah. I always thought Waymo needed to be priced lower than Uber and Lyft to succeed, but actually I'm very happy to pay a 2x premium for it.

I love Waymo. Once you see it, you're just like, "Wow, this is insane." And then you get used to it. You get in there, you're like, "This is crazy." And then you forget about it.

Totally. And I think it's also changed the vernacular. A lot of people at Anthropic love Waymo. In the past you would say, "Hey, let's call" whatever ride-share app, and now everyone's just like, "Okay, is the Waymo here?"

Okay, two more questions. Do you have a favorite life motto that you often come back to in work or in life?

Just do things.

That's right.

I think there's a lot of value in first-principles thinking. If you know what you're optimizing for and you have strong first principles, then you can normally deduce what the right course of action is, clearly articulate it to all the stakeholders, and then you should just do it. I think jobs are fake. If you understand the constraints, you can figure out what you can do, then just try to do it quickly, learn from the mistakes, and apologize or fix them if you did something wrong.

"You could just do things," whoever said that.

I think it's liberating, actually, to tell people this. In a lot of companies, roles are very strictly defined: this is what the PM does, this is what the designer does, this is what the engineer does. And even team scopes are very rigidly defined: hey, this corner of the codebase we touch, and this corner we're not allowed to touch. What "just do things" lets people do is feel empowered to make these decisions, empowered to operate across team boundaries, just to get something done.

That feels like a big, important skill to be good at. People call it agency: just do the things, bias towards action, all these ways of describing not waiting for permission.

Yeah. I think this is my favorite reason to work at a startup at some point in your life. One thing that was very life-changing for me was working at Scale when we were 20 people. There was just no process, and we had really big problems that we needed to solve. I really appreciate Alex and the rest of the team for empowering me and everyone else to just figure things out, without any boundaries for what sales is supposed to do, what ops is supposed to do, what engineering is supposed to do. You have all the tools at your disposal, you have some ambitious, hairy problem statement, and you can do whatever you need to get to a good solution.

You almost need that experience to build that skill, to feel comfortable doing that. Because a lot of people, you know, go through school and college, and it's all, do the thing we tell you to do and you will get a good grade. And you have to unlearn that: okay, I'm just going to do the thing that needs to be done, even if people think it's dumb, because I think it's the right thing to do.

Yeah. Exactly.

Okay, I actually have two more quick questions. Two more final questions. One is: when Claude thinks, there are all these, I don't know if you call them verbs. What's the term for these things?

Thinking words.

Thinking words. And interestingly, these all leaked in the source code. Do you have a favorite thinking word?

I really like "manifesting." It's also the sticker that I have; it's my favorite.

Clearly the winner. Okay, final question. I asked Boris this too: with AGI potentially arriving in our lifetime, when you potentially don't have to work, what are you going to do with all your time?

I think it will take a long time for AGI to diffuse across society, so the immediate thing is actually helping bring the world along. My non-serious answer for after this happens is that I'll probably just do a lot of rock climbing. I'll probably move to Fontainebleau and just live amongst 10,000 boulders and climb for a bit. There are also so many books I want to read. My goal is to be able to read one or two books a week, and I'm currently at probably 0.5. The backlog is pretty big. I think there's just so much we can learn from history, and so much that I don't understand as well as I would love to. I don't know anything about physics, or robotics, or hardware, or aerospace; there are just so many interesting topics. So I'm excited to learn, even knowing that the AI will already know it.

Cat, this was amazing. You're awesome. Two final questions: where can folks find you online if they want to reach out and follow what you're up to, and how can listeners be useful to you?

The best way to reach out is on Twitter; I'm Cat Wu there. Feel free to tag me in things, feel free to DM me. I read all my DMs. I don't always respond to every single one, but I will read them all. And the thing that is most helpful is to tell us where Claude Code and Cowork aren't working well for you. We are very grateful for the amount of positive feedback, but the thing we thrive on is edge cases, errors, specific tasks that we can reproduce where Claude Code or Cowork fail. Because if you're able to share that with us and we're able to reproduce it, then it's something we can actively improve in our next generations of models and our next harnesses.

Extremely cool. People on Twitter are not shy about sharing this feedback, so keep it coming.

Please, please share the problems you're having with us.

Yeah. And it's really cool to see you and your team being so active on Twitter and responding to people. What I'm hearing is that this is stuff you actually see and react to.

Yeah, we appreciate everyone being so engaged with us. It gives the team a ton of energy. We have this channel of user love, and whenever you share a success story, we post it there. And whenever you share issues with our product, we put them into our feedback channel, so that our broader team is able to act on them.

That is so cool to know. Thanks for sharing that. Well, Cat, thank you so much for being here.

Thanks for having me.

Bye everyone.

Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com.

See you in the next episode.
