
OpenAI’s head of platform engineering on the next 12-24 months of AI | Sherwin Wu

By Lenny's Podcast

Summary

Topics Covered

  • Engineers Evolve into Agent Managers
  • Software Engineering Becomes Wizardry
  • One-Person Startups Spark B2B Boom
  • Build for Future Model Capabilities

Full Transcript

95% of engineers use Codex. 100% of our PRs are reviewed by Codex.

I don't know what job has changed more in the past couple years. Engineers are becoming tech leads. They're managing fleets and fleets of agents. It literally feels like we're wizards casting all these spells, and these spells are going out and doing things for you. What do you think people aren't pricing in yet? The second- or third-order effects of the one-person billion-dollar startup. To enable a one-person billion-dollar startup, there might be a hundred other small startups building bespoke software. And so I think we might actually enter a golden age of B2B SaaS.

>> I've been hearing more and more there's this stress people feel when their agents aren't working. There's a team that's actually doing an experiment right now within OpenAI where they are maintaining a 100% Codex-written codebase. They run into the exact problems that you're describing. And so usually you're like, all right, I'll roll up my sleeves and figure it out. This team doesn't have that escape hatch.

>> You've shared that listening to customers is not always the right strategy in AI. The field and the models themselves are just changing so, so quickly. They tend to disrupt themselves. The models will eat your scaffolding for breakfast. What's your advice to folks that are like, "Okay, I don't want to miss the boat"? Make sure you're building for where the models are going and not where they are today. There's a quote from Kevin Weil, our VP of science here. He likes saying, "This is the worst the models will ever be."

Today, my guest is Sherwin Wu, head of engineering for OpenAI's API and developer platform. Considering that essentially every AI startup integrates with OpenAI's APIs, Sherwin has an incredibly unique and broad view into what is going on and where things are heading. Let's get into it after a short word from our wonderful sponsors.

Today's episode is brought to you by DX, the developer intelligence platform designed by leading researchers. To thrive in the AI era, organizations need to adapt quickly. But many organization leaders struggle to answer pressing questions like: which tools are working? How are they being used? What's actually driving value? DX provides the data and insights that leaders need to navigate this shift. With DX, companies like Dropbox, Booking.com, Adyen, and Intercom get a deep understanding of how AI is providing value to their developers and what impact AI is having on engineering productivity. To learn more, visit DX's website at getdx.com/lenny. That's getdx.com/lenny.

Applications break in all kinds of ways: crashes, slowdowns, regressions, and the stuff that you only see once real users show up. Sentry catches it all. See what happened, where, and why, down to the commit that introduced the error, the developer who shipped it, and the exact line of code, all in one connected view. I've definitely tried the five-tabs-and-a-Slack-thread approach to debugging. This is better. Sentry shows you how the request moved, what ran, what slowed down, and what users saw. Seer, Sentry's AI debugging agent, takes it from there. It uses all of that Sentry context to tell you the root cause, suggest a fix, and even open a PR for you. It also reviews your PR and flags any breaking changes with fixes ready to go. Try Sentry and Seer for free at sentry.io/lenny and use code Lenny for $100 in Sentry credits. That's sentry.io/lenny.

Sherwin, thank you so much for being here and welcome to the podcast.

>> Thank you. Thank you for having me.

>> I want to start with what's feeling like a barometer of progress in AI, especially in engineering. What percentage of your code, if you even write code anymore, and your team's code, is written by AI at this point?

>> I do still write code occasionally. And I'd actually say for managers like myself, it's way easier to use these AI tools than to manually code at this point. And so I know for myself and some of the other engineering managers at OpenAI, all of our code is written by Codex at this point. But more broadly, there's just so much energy, a tangible energy internally, around how far these tools have gotten, how good Codex as a tool has gotten for us.

It's a little hard for us to exactly measure how much of the code is AI-written, because the vast majority of it, I'd say close to 100%, is usually generated by AI first. What we do track, though, is that at this point the vast majority of engineers use Codex on a daily basis. So 95% of engineers use Codex, and 100% of our PRs are reviewed by Codex daily as well. Basically any code that goes into production, that's merged in, Codex has had its eyes on it and has suggested improvements and changes in the PRs. That's what we're seeing internally, but by and large the most exciting thing is just the energy.

Another observation we've had is that engineers who tend to use Codex more open way more PRs. They're actually opening 70% more PRs than the engineers who aren't using Codex as much, and the gap is widening. The people who are opening more PRs are learning how to use the tool more and more and getting more efficient, and that 70% gap keeps growing over time; it might have actually increased since I last looked at the number.

>> Okay. So just to make sure we hear what you're saying: all of the code of these 95% of engineers at OpenAI is written by AI. It's written, and then they review it.

>> Yep. Yep.

>> It's like crazy that that's almost not crazy anymore; we're just getting used to this.

>> I think there's still some getting used to it, to be clear. There are also, I think, some engineers who trust Codex a little bit less, but basically every day I talk to someone who is blown away by something that it can do, and their bar of trust, how much they trust the model to do on its own, goes up over and over, over time. And there's a quote from Kevin Weil, our VP of science here. He likes saying this is the worst the models will ever be. And so this is the worst the models will ever be for software engineering as well. Over time you just see people trusting it more and more, and then we'll see the models get better and better as well.

>> Yeah. Kevin Weil, former podcast guest, said exactly that line on this podcast, a few times.

>> Yeah. Peter, the Clawdbot, or Moltbot, developer, whatever it's called now, recently shared that he uses Codex for his work, and he feels like anytime it does things he just trusts that it has done the right job; he's almost certain he could just commit it to master and it'll be great.

>> Yeah. Yeah. He's a great user of Codex. I know he's in close touch with the team, gives us great feedback. Not surprised that he uses it. I mean, sorry, it's called OpenClaw.

>> OpenClaw. Yeah.

>> OpenClaw is a great product. And then I saw that, I mean this is very recent, but this morning I think Moltbook was shared as well, and seeing all the AI agents talk to each other is pretty surreal. It's basically Her happening in real life, is what I'm hearing.

>> Yeah. Yeah.

>> So just coming back to this crazy moment we are living through, for engineers in particular: we've gone from you write every line of code to now AI is writing all of your code. I don't know what job has changed more in the past couple years, a job that we didn't expect to change this much. The job of an engineer is so different; in the entire lifespan of the role, it's in just the past couple years that it's shifted to "I don't write code anymore." How do you imagine the role of an engineer and the job of a software engineer looks in the next couple years? What is that job?

>> Yeah, it's honestly been really cool to see, and it's part of where the excitement is, because the job is likely going to change pretty significantly over the next one to two years. It kind of feels like we're still figuring things out, though, and so there's this excitement, especially from some of the software engineers, of: we're in this rare moment, maybe over the next 12 to 24 months, where we'll get to figure things out ourselves and set our standards for ourselves. There's a common thing that everyone's saying, which is that IC engineers are becoming tech leads. They're basically like managers now. They're managing fleets and fleets of agents. I know many of the engineers on my team basically have 10 to 20 threads being pulled on at the same time, obviously not all actively running Codex jobs, but a lot of parallel threads. They're checking in on what they're doing, they're steering the agents in Codex, and giving them feedback. And so their job has really changed from just writing the code itself into being almost like a manager.

In terms of where I think this will go one to two years from now: one metaphor that I always come back to is from a programming textbook I read back in college called SICP. I don't know if you've heard of it: Structure and Interpretation of Computer Programs. At MIT it was really popular; it was the textbook for the intro programming course for a very long time, and it has this cult following. It teaches you programming via a dialect of Lisp called Scheme, so it introduces you to functional programming; it's very mind-opening in that way. But the thing that was memorable for me about that book is that the very beginning of it describes programming as a discipline and draws a metaphor to sorcery. It says software engineers are like wizards, and programming languages are like incantations: you're issuing these spells, and these spells go out and do things for you, and the challenge is what incantation you have to say to make the program do what you want.

This book was written in the 1980s, so a while ago, and I think that metaphor has persisted over time. I think it's actually playing out as we move into this new era of vibe coding, or whatever software engineering will look like, because programming languages were basically these incantations. They've changed over time, and the trend has been that it's been easier and easier to get the computer to do what you want via programming. I think the current wave of AI is probably the next stage of that evolution. It is now literally incantations, because you can tell Codex, you can tell Cursor exactly what you want to do, and it'll go do it for you.

And I particularly like the wizard and sorcery analogy because I think our current state is starting to move toward the Sorcerer's Apprentice, from Fantasia, where Mickey Mouse finds the sorcerer's hat and tries to do all these things. I actually think it's a really apt analogy because, one, it's really powerful now: these incantations are extremely high leverage, but you have to know what you're doing. In the Sorcerer's Apprentice the whole plot is that Mickey goes wild, the brooms go crazy, and everything's flooding. I think he literally sets the brooms off on a task and then goes to sleep. It's vibe coding at its greatest, and then eventually the old sorcerer comes back and cleans everything up. When I see engineers running these 20 different Codex threads at a time, there is some skill, some seniority, and a lot of thought that needs to go into this, because you want to make sure the models aren't going off the rails. You definitely don't want to just completely go away and ignore the thing. But it's also extremely high leverage: a very senior engineer who's really proficient with these tools can now just do way more things. And I think this is also what makes it fun. It literally feels like we're wizards now. It feels like we're closer to making it feel like this magical experience, where we're casting all these spells and having software do all these things for you.

>> I was thinking of the Sorcerer's Apprentice exactly as the metaphor as you were describing that, so I'm glad you went there. A previous podcast guest described it as: you have a genie that grants you wishes. It's a useful frame because you have to be very clear about the wish you want, like if you wish to be big, how big that could be.

>> Yeah. Or it might be like the monkey's paw type of thing, where you got what you wanted, but what are the side effects?

>> Yeah. I think the analogy is great, and the crazy thing for me is just the staying power of that book. SICP is literally called "the wizard book"; people call it that because it's the metaphor they weave throughout the book. And we've basically reached that point now, which is really cool.

>> There are two threads I want to follow here. One is, I've been hearing more and more about this stress people feel when their agents aren't working. You fire off all these Codex agents and then you have to stay on top of them. Oh, one's not working; I'm wasting time. Do you feel that across your team at all?

>> Yeah, it happens all the time. And I actually think this is where the interesting part of all of this lies right now, because these models aren't perfect, these tools aren't perfect, and we're still trying to figure out how to best interact with Codex, or with these AI agents, to get work done. We see this come up all the time. There's a particularly interesting team that we have internally: a team that's actually doing an experiment right now within OpenAI where they are maintaining a 100% Codex-written codebase. Normally you'll have the AI write code, but you'll obviously end up rewriting a lot of it, and you might need to double-check and change things; this team is just fully Codex-pilled and leaning in entirely. And they run into the exact problems that you're describing. Their challenge is: I want to get this feature built, but I can't get the agent to do it. Usually there's an escape hatch where you're like, all right, I'll roll up my sleeves and figure it out, and instead of using Codex I might use tab-complete in Cursor and things like that. But for the experiment, this team doesn't have that escape hatch. So then the challenge is: how do I get the agent to do this?

I actually think we're going to be publishing a blog post on some of our learnings here, but a lot of fascinating paradigms and best practices are falling out of this. One interesting thing that we've noticed, I don't know if this is what you feel, but we definitely feel it here, is that a lot of the time, when the coding agent is not doing what you want, it's usually a problem with context, with the information you've given it. You've either underspecified, or there's just not enough information available to the agent, available to Codex, about how to do something. And so the challenge is to add documentation, to work around this limitation, and basically encode more tribal knowledge that's in your head into the codebase, either via code comments, via code structure itself, or via text files: MD files, skills, any type of additional resource within the repository, so that the model can better do its task.
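The "encode tribal knowledge into the repo" idea he describes is often implemented as an instructions file that coding agents read before working; AGENTS.md is a common convention for this. As a hedged sketch, not OpenAI's actual setup, such a file might look like the following, where every path, command, and rule is illustrative:

```markdown
# AGENTS.md (illustrative sketch)

## Build and test
- Run `make test` before proposing a PR; CI runs this same command.
- Lint with `make lint` and fix warnings, not just errors.

## Conventions (tribal knowledge that used to live in people's heads)
- All database access goes through `app/db/repository.py`; never query tables directly.
- Feature flags live in `config/flags.yaml`; new flags default to off.
- Public API handlers must not change response shapes without a version bump.

## When stuck
- Write a failing test that reproduces the issue before attempting a fix.
```

The point is that anything a senior engineer would say in code review ("we never query tables directly") becomes context the agent sees on every run, instead of knowledge it has to rediscover.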

There's a whole bunch of other learnings from this group which I think are fascinating to explore. But yeah, removing that escape hatch of falling back to hand-written code has allowed them to start piecing together a lot of the problems that we'll have to solve if we really want to lean into agents.

>> Another issue people run into: you talked about how people are shipping PRs like crazy, a lot more PRs, if they're working with AI. Obviously code review is becoming a bigger challenge. Is there anything you've figured out on your team to help speed that up, to make that scale, and not just create this terrible job for people where they're sitting there reviewing PRs all day?

>> Yeah. One thing is Codex reviews 100% of all of our PRs at this point. One really interesting thing that's happened is that the things we tend to hand to the models immediately tend to be the things that annoy us, the most boring parts of software engineering. It's also why it's more fun now: we get to do more of the fun things. For me, speaking for myself, I really hated code reviews. It was one of the worst things for me. I remember in my first job out of college, at Quora, I was working on the newsfeed, and so I owned the code for the newsfeed. I was a reviewer for newsfeed, and it was the central piece of code that everyone would touch. Every morning I'd log in and there'd be like 20 to 30 code reviews, and I'm like, oh my goodness, I've got to get through all these. I would procrastinate, and then it grows to like 50. So there's a lot of code reviews.

Codex is really good at reviewing code. One thing that we've noticed is that 5.2 in particular has gotten extremely adept at reviewing code, especially when you steer it in the right direction. So yes, for code reviews, we create a lot of PRs, but Codex reviews all of them, and it makes code reviews go from, I don't know, a 10-to-15-minute task to sometimes just a two-to-three-minute task, because you have a bunch of suggestions already baked in. A lot of the time, especially for small PRs, you actually don't even need people to review. We trust Codex in this way; the original author just looks at Codex's review. The benefit of code review is to have a second pair of eyes to make sure you're not doing anything dumb, and Codex is a pretty smart second pair of eyes at this point, so that's something we've heavily leaned into.

The general CI process, and the post-push and deployment process, has also been heavily automated via Codex internally at this point. If you talk to a lot of engineers, the thing that annoys them the most is: after you've written your beautiful code, how do you get it into production? You've got to run through all these tests, lint errors, code review. There's a lot of automated stuff you can do with Codex, and so we've built some tools internally that help automate that process. Automate the lint: if there's a lint error, it's a very easy Codex fix; it can just patch it and restart the CI process. We're trying to collapse all of that into as little work for an engineer as possible, and the byproduct is that they can now merge and push out a lot more PRs.
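That lint-then-patch loop can be sketched as a CI step. This is a hypothetical GitHub Actions fragment, assuming a Codex-style CLI is installed on the runner; the commands, prompt, and project layout are illustrative, not OpenAI's internal tooling:

```yaml
# Hypothetical CI step: on lint failure, let an agent patch and re-verify.
- name: Lint with agent auto-fix
  run: |
    npm run lint || {
      # Ask the agent for a minimal fix, commit it, then re-run lint to confirm.
      codex exec "Fix the errors reported by 'npm run lint', changing as little as possible."
      git commit -am "chore: automated lint fixes"
      npm run lint
    }
```

The design choice here is that the agent only gets invoked on failure, and the original check is re-run afterward, so a bad patch fails CI rather than slipping through.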

>> Codex is writing the code; Codex is reviewing its own code. I'm curious if you are open to using other models to review your models' work. Is that a path, or is it just good enough and you don't need anything else?

>> So I will say there's definitely a circular thing here, and going back to the Sorcerer's Apprentice, you want to make sure you're not letting the brooms go crazy. We're very thoughtful, I'd say, around which PRs are completely Codex-reviewed. Most people still obviously take a look at their PRs. It's not like it's going to zero; it's more like going from 100% attention to 30% attention, which helps things push through. In terms of multiple models: we obviously test a lot of models internally, so we have a lot of those. We use external models less; we think it's important to dogfood our own models and get feedback there. But there are also a lot of internal variants of models you can use to give you a different perspective, and we've found that to work quite well.

>> Okay. So just to make sure we get a barometer of today's world at OpenAI in terms of AI and code, just so I understand, and then I want to move on to a different topic: 100% of code across OpenAI is written by Codex at this point. Is that the way to frame it?

>> I wouldn't make the statement that 100% of code running in production today was written by AI, and it's kind of hard to do attribution there. But almost every engineer heavily uses Codex in all of their tasks at this point. So if I were to guesstimate, the vast majority of code at this point was probably authored by AI.

>> Incredible. Okay. So there's a lot of talk, and we've been talking about the IC role, the work of an IC engineer. There's less talk about the changing role of a manager, especially an engineering manager. How has your life as a manager changed with the rise of AI, and what do you think the role of a manager is in the future?

>> It's definitely changed less than for an engineer. There's no Codex for managers just yet. However, I use Codex quite a bit for some of the more manager-y tasks that I do. I'd say a couple things are changing, some trends. I don't think it's changed that much yet, but I see trends, and if you play them out, you can see where a lot of this is going. One thing that's becoming increasingly clear is that Codex really empowers top performers to be a lot more productive. And I think this is maybe true for AI more broadly, across society: the people who really lean in, who have high agency, who get good at these tools, will supercharge themselves. I'm noticing this now as well; the top performers end up being a lot more productive, and so you see a broader spread in team productivity.

One thing that I've always done as a management philosophy is to spend the majority of my time with top performers: make sure they're unblocked, make sure they're happy, make sure they feel productive and they feel heard. I think this is even more true in an AI world, where your top performers are going to be really shooting ahead using these tools. One example is the team that's maintaining the 100% Codex-generated codebase: just letting them rip and seeing what happens there is something that's paid dividends. So that's one trend I'm seeing; spending even more time with top performers is likely going to continue for managers.

The other thing, and this is more of an observation, is my sense that a lot of these AI tools are now available to managers too: less writing code, more things like ChatGPT with organizational knowledge, being able to do research and understand organizational context a lot better. Another good example: we're doing performance reviews right now, and it's actually really easy to use ChatGPT with internal knowledge hooked up to GitHub, our Notion docs, and Google Docs to get a really good sense of what a person has done over the last 12 months, and to write a little deep-research report on it. My sense is managers will be able to manage much larger teams in this world, kind of like how software engineers are managing 20 to 30 Codexes. These tools will allow people managers to be higher leverage and will let them manage teams of way more than the current best practice, which I think is six to eight for software engineering. You see this applied in non-engineering domains like support or operations, where previously the size of a support team might be limited, but as you can pass off more things to agents, you can do more work and also manage more people. I think the same thing might happen for people management as well, especially in tech companies.

And we're already seeing this. There are some teams where there's an EM managing quite a few people, and they're doing it pretty adeptly because of some of these tools, where they can get higher leverage, understand what their team's doing, understand organizational context a little bit better, and operate that way.

>> I love this advice. The way you described it is you've always leaned into top performers and spent more time with them: unblock them, make sure they're happy. The way Marc Andreessen, who was just on the podcast, phrased it is AI makes good people better and it makes great people exceptional.

>> Yeah. Yeah.

>> And what you're saying here is that just doing this more and more is probably the right move: spending more time with the best people on your team to unblock them and make sure they have everything they need.

>> Yeah. A very good example right now: there's a group of engineers internally who are really into Codex and are thinking through what the best practices are for interacting with this model, and that is just an extremely high-leverage thing for them to do. As a manager I'm just like, yeah, go explore this; whatever best practices come out of it, we'll share with the org. We do all these knowledge-sharing sessions; we'll share documents and best practices everywhere. Things like that elevate everyone, and I view that as another example of this trend we're seeing, where the top performers really get exceptional.

>> People just have a sense that this is big. AI is changing so much. The world is changing. It's going to be a huge deal. What do you think people aren't pricing in yet, in terms of what will change and where things are heading? What's an example of something where you think, okay, we're not realizing this yet?

>> So one of my favorite phrases that has come out of this whole AI wave is the idea of the one-person billion-dollar startup. I actually think Sam may have been the first one to say it, but it's fascinating to think about, right? If people are so high leverage, at some point there will likely be a one-person billion-dollar startup. And while I think that's really cool, I think people aren't really pricing in the second- or third-order effects of this. Because what the one-person billion-dollar startup implies is that one person can have so much more agency and so much more leverage using these tools that it is just super easy for them to get everything done that they need to for their business to ultimately create something worth a billion dollars. But I think there are a couple other implications of this.

One of them is: if it's possible for a person to create a one-person billion-dollar startup, it also means it's way easier for people to just create startups in general. I actually think one second-order effect of this is that there's going to be a huge startup boom, a small, SMB-style boom, where anyone can build software for anything, right? You're starting to see this play out in the AI startup scene, where software has become a lot more vertical-oriented: creating an AI tool for some vertical tends to work quite well, because you really lean into that particular domain and you really understand the use case for it. And if you play out AI, there's no reason why you can't have 100x more of these startups. So one world we might end up seeing is that, in order to enable a one-person billion-dollar startup, there might be a hundred other small startups building bespoke software that works extremely well to support other types of small, one-person billion-dollar startups. And so I think we might actually enter a golden age of B2B SaaS, and of software and startups in general. I think that's a really interesting trend to

kind of see because as it's as it's as it gets easier and easier to build software, um as it's easier and easier to uh you know uh uh run a company um you might actually just end up seeing

way more of these these these startups.

So the way I I' I've been thinking about is like yeah there might be one uh one person billion dollar startup but there might be like a hundred you know uh hund00 million startups there might be

tens of thousands of $10 million startups and as an individual it's actually pretty great to have a $10 million business like that's like enough for you're set for life at that point and so you know we might really see see

an explosion in that way and and I feel like people aren't aren't really you know pressing that in. Um there's

another kind of like third order effect to this you know and again all of these like as you get to the further and further out predictions I think uh are there's a lot of uncertainty I think if we end up moving to this world where you

end up with these like kind of micro companies building software that works for one or two people uh who own the company and and and are working there um I think the startup ecosystem will change I think the VC ecosystem will

change you know might we might end up in uh in a world where there's just like a handful of big players that are offering platforms and supporting all of these startups. But, you know, the types of

startups. But, you know, the types of venture scale return startups that can really 100 or thousandx your your investment might actually end up shrinking if you end up having a bunch

of these, you know, smaller 10 to$50 million uh companies. Uh, which are not great for venture solid returns, but are great for the individuals, the high agency individuals who are now, you know, really leaning into AI to to to

build these businesses for themselves.

>> I love how many order effects we've been through. I want to hear the fourth-order effect now. Sherwin, I'm just joking.

>> I can't. Fourth order is too gigabrain for me. I can't think that far ahead.

>> It's like Inception, where everything gets slower every time you go a layer deeper. Okay, so the billion-dollar startup. I think about this a lot. I'm not going to be a billion-dollar startup, because what I'm doing is not venture-scale in any way and not super high-leverage. But just seeing how many support tickets I get for the most ridiculous things, it's hard for me to imagine one person handling it. I'm bearish on this billion-dollar startup, and I just want to share this thought, simply because of the support costs. Even if AI is helping you, at a billion dollars, unless your ACVs are very high and you have very few customers, just dealing with support is hard to scale, in my experience. People could solve their own problems, but they'll still email support to ask about things. So unless you have a bunch of contractors, and I don't know, does that even count as a single-person company? I feel like it's very difficult to scale a billion-dollar startup and not have someone helping you with at least the support work, and AI, I think, will only take you so far.

>> I think that's true. And actually my view on it is slightly different, which is: I think Lenny's Podcast might end up becoming a billion-dollar startup. But what I think might happen is, instead of you being the one person who has to dispatch an AI to solve and fix those support tickets, there might be a whole smattering of other startups building software that's super tailored to what you might need. There might be 10 or 20 startups that build support software for podcasts and newsletters, and each of those might be a one-person startup. It doesn't need to be a big one. They might be able to code up this product very easily and build their own thing, and because it's so tailored and unique, and hopefully useful for you, it might be something you purchase as the one-person billion-dollar startup.

>> I would buy that. I would buy that.

>> Yeah, there's a question of what you in-house and what you outsource. What I think might happen is, because the cost of writing software and building products is collapsing so much, you might end up outsourcing a lot of this, and in doing so reducing the size of your company. That's the world I think might end up happening. Again, there's high uncertainty in how this plays out, but the end result might still be one person driving a massively leveraged company that actually reaches a billion dollars.

>> I could see that. I also think about Peter at Clawdbot/Moltbot/OpenClaw, just how barraged he is right now by all these asks and emails and pings and DMs and PRs. And he's not even making any money off this thing.

>> Yeah, I can't imagine what it's like to be him right now. It must be absolutely insane. It's probably like the months after we launched ChatGPT, the craziness that was.

>> As one man.

>> He's coming on the pod, by the way, in a week.

>> Oh, that's exciting.

>> Yeah. Maybe the fourth-order effect is that distribution becomes increasingly important, because there are so many freaking things trying to get your attention. So people with an audience and a platform, I think, become more and more valuable. Which is good stuff. Okay, I wanted to come back to your management stuff. I really loved your insight that spending more time with top performers has been really successful for you. Thinking about you as a manager of a team that is building the platform that powers basically the entire AI economy, every AI startup is building on your API, clearly you're doing a great job. What other core management lessons have you learned? What do you find is really important and key to your success as a manager of engineers, and of people in general?

>> Yeah. I think a lot of the lessons I've learned here, I don't know how specific they are to the OpenAI API or some of our enterprise products in particular. My management philosophy has obviously changed over time, but it's probably stayed the same more than it's changed. One of the principles is what I talked to you about before: spending a lot of time with your top performers. To be very concrete, more than 50% of your time with maybe your top 10% of performers, and really trying your best to empower them. The way I think about it comes back to this analogy of the software engineer as a surgeon, which comes from The Mythical Man-Month.

It's actually funny: I pull it from the book, but in the book they were predicting the future, because it was written in the 1970s. They said software engineering might move to a world where software engineers are like surgeons. In a surgery room there's one person doing the work, one person cutting and doing the actual surgery, and everyone else in the room is there to support them: the nurse, the resident, the fellow. The surgeon says "I need a scalpel," and they hand over a scalpel; "I need this tool, this machine," and they bring it over. Everyone is there to support the one surgeon. The Mythical Man-Month predicted that that's the direction software engineering would go. I don't think that's exactly played out; it's much more collaborative, and it's not only one person doing the work. But I've always really liked that analogy, and it's what I strive to emulate in my own management philosophy. Software engineering isn't really like surgery, it's not just one person doing the work, but the way I treat the people on my team, and the way I act as a manager, is that I want to empower them and make them feel like they're a surgeon: making sure I'm supporting them and that they have everything they need to do their work, so it feels like they have an army of people supporting them, looking around corners, and handing them everything they need, when it's really just me as the manager. The example I give is that looking around corners and unblocking people, especially from an organizational perspective, is extremely useful. And going back to the AI conversations, it's even more important nowadays: if people are cranking out PR after PR, the main thing bottlenecking progress and shipping tends to be organizational or process-oriented. If you as a manager can look around corners and unblock the team, if the surgeon needs a scalpel and the manager already has a scalpel ready for them, that's the best-case scenario. That's the way I approach management, and especially engineering management, and it's something that's really stuck with me. Even though software engineers aren't exactly surgeons, that metaphor has stayed in my mind for the rest of my career.

>> I love that. And I wonder if that's something AI can help with: looking around corners and predicting, this engineer is going to be blocked by this decision, we need to figure this out.

>> Yeah, that's actually a really good point. I haven't tried this yet, but I wonder what would happen if I asked ChatGPT, hooked up to company knowledge, "What are the active blockers?" Look through all the Notion docs, maybe the Slack messages, it's probably in Slack somewhere. What are the active blockers on my team, and is there something I can do to help? I had not thought about that, but you're right.

>> You just had an insight right here.

>> Yeah. Yeah. Yeah.

>> And I think even more interestingly: what do you anticipate will be a blocker for this engineer or this team in the coming months?

>> Yeah, you ask the model.

>> You ask the AI to do the second- and third-order things. Anticipate what the blockers will be next month, too.

>> I think we've got a good idea right here.

>> Yeah. Yeah.
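The query Sherwin sketches, pointing a model at exported internal context and asking for current and upcoming blockers, could be scripted. This is a minimal illustrative sketch: the prompt wording is invented here, and the commented-out call assumes the OpenAI Python SDK plus a `notes` list you'd assemble from your own Notion/Slack exports.

```python
def build_blocker_prompt(snippets: list[str]) -> str:
    """Combine exported notes (Notion pages, Slack threads, etc.)
    into one prompt asking the model to surface blockers."""
    context = "\n---\n".join(snippets)
    return (
        "You are helping an engineering manager look around corners.\n"
        "From the internal notes below, list:\n"
        "1. Active blockers on the team (who is blocked, and on what).\n"
        "2. Likely blockers in the coming month (second-order effects).\n"
        "3. One concrete unblocking action the manager can take for each.\n\n"
        "Internal notes:\n" + context
    )

# Sending it to a model (illustrative; requires an API key and the openai SDK):
# from openai import OpenAI
# client = OpenAI()
# resp = client.responses.create(model="gpt-5.1", input=build_blocker_prompt(notes))
# print(resp.output_text)
```

The interesting part is less the plumbing than the habit: running this on a schedule would approximate the "anticipate next month's blockers" idea from the conversation.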

>> This episode is brought to you by Datadog, now home to Eppo, the leading experimentation and feature-flagging platform. Product managers at the world's best companies use Datadog, the same platform their engineers rely on every day, to connect product insights to product issues like bugs, UX friction, and business impact. It starts with product analytics, where PMs can watch replays, review funnels, dive into retention, and explore their growth metrics. Where other tools stop, Datadog goes even further: it helps you actually diagnose the impact of funnel drop-offs, bugs, and UX friction. Once you know where to focus, experiments prove what works. I saw this firsthand when I was at Airbnb, where our experimentation platform was critical for analyzing what worked and where things went wrong. And the same team that built experimentation at Airbnb built Eppo. Datadog then lets you go beyond the numbers with session replay: watch exactly how users interact, with heat maps and scroll maps, to truly understand their behavior. And all of this is powered by feature flags tied to real-time data, so you can roll out safely, target precisely, and learn continuously. Datadog is more than engineering metrics. It's where great product teams learn faster, fix smarter, and ship with confidence. Request a demo at datadoghq.com/lenny. That's datadoghq.com/lenny.

Okay, I'm going to shift to talking about the API and the platform that you all build. You work with a lot of companies implementing your API, your platform, building on your tools. You told me that you find a lot of companies actually have negative ROI on their AI deployments, which I think is what a lot of people read about and feel, and it's interesting that you're actually seeing it. What's going on there? What are they doing wrong? What's happening in the world of AI deployments and ROI?

>> Yeah. So, to be clear, I don't explicitly see quantitative numbers around this; it's actually really hard to measure. But from observing some companies trying to do AI, I would not be surprised if a lot of AI deployments are actually negative ROI. Part of this, too, is that there's a general sentiment from folks around the country, basically outside of tech, that AI is being forced onto them, and I think that's probably a symptom of some negative-ROI AI deployments. A couple of things I've observed around this. One thing, and I come back to this again and again, is that we in Silicon Valley forget that we live in a bubble. Twitter, sorry, X, is a bubble. Silicon Valley is a bubble. Software engineering is a bubble. Most people in the world, most people in the US, are not software engineers, are not very AI-pilled, are not following every single model release, and so they're highly out of the loop on how to use this technology. Meanwhile, we always talk about all these best practices for Codex, all these Codex power users within OpenAI. I'm sure everyone who posts on X is a crazy power user of these AI tools: they lean into skills, they lean into AGENTS.md,

>> MCPs.

>> Yes, all of that. But when I talk to some of these companies, and to the actual employees using these tools, it's the most basic things they're trying to do, and they have very little understanding of exactly how the technology works. That's one big observation for me: they're asking very simple questions of these things. They're really not pushing them just yet. And that ties into what I think more companies could do, or what a more ideal AI deployment setup looks like.

This is how we've run things within OpenAI, too. The companies where I think it's started to work really well have a combination of top-down buy-in, the C-suite saying, we want to become an AI-first company, so there's buy-in, they buy the tools, there's exec support, and bottoms-up adoption and buy-in. What I mean by that is actual employees doing the work who are really excited about the technology and are willing to learn, evangelize, build best practices, and knowledge-share within the organization. We've seen this a lot internally. OpenAI has obviously always wanted to be a very AI-centric company, but where it really started taking off was with the introduction of Codex and these tools, where actual employees themselves could start applying it to their work. And I think you really need this, because at the end of the day everyone's work is very different, very unique. Software engineering is different from finance, is different from operations, is different from go-to-market and sales. There are a lot of last-mile intricacies of work that really need to be handled in a bottoms-up fashion. My sense is that a lot of these AI deployments don't have bottoms-up adoption: it was an exec mandate, extremely top-down, and very divorced from what the actual work looks like. As an end result, you end up with a giant workforce that doesn't really understand the technology: I know I'm supposed to use this, and maybe it's on my performance review too, but I'm not sure what to do. And they look around, no one else is doing it, there's no one else to learn from. So my recommendation for companies pushing this is: find, or maybe even staff, a full-time internal tiger team that can explore the full extent of the capabilities, apply them to specific workflows, do the knowledge sharing, and create excitement among the folks who might want to use this technology. Because in the absence of that, it's actually very difficult to pick up.

>> And who would you put on this tiger team? Is it engineer-led? In your experience, is it a cross-functional sort of team?

>> Yeah, it's interesting. A lot of companies don't have software engineers. The pattern I've seen is that it tends to be software-engineering-adjacent people, basically technical people who are not software engineers. Those tend to be the ones who get most excited about this. It's maybe the support-team or operations lead who doesn't code but loves using these tools and is an Excel wizard or something. Technically adjacent, coding adjacent, pretty technical: those are the kinds of people I've seen at these companies who really light up and get excited about this. And you can usually build a team around that. But yeah, it's often not software engineers. Software engineers, I think, will understand this, but not every company has software engineers; they're actually kind of a rarity. They're hard to find, and they're expensive. So it's these other types of folks.

>> What I'm hearing is that the anti-pattern is top-down: the CEO and exec team say, we are going to go AI-first, we're going to lean into AI, everyone's going to be judged on their performance using AI tools, on how much your productivity is increasing thanks to AI. And with that being just top-down, without creating a team that is bottom-up spreading the gospel, you find it doesn't work.

>> Yeah. Yeah. Exactly. Exactly.

>> And the advice is: find the people who are most excited, and instead of having them spread out through the organization, what you find works is to create a little AI evangelist team that finds ways to use it and spreads it across the work.

>> Yeah. It's kind of fun hearing you play it back to me. Another way to think about it, tying back to my own management philosophy, is: find the high performers in AI adoption and empower them. Let them build hackathons, hold seminars, do knowledge sharing, create the seeds of excitement internally.

>> Okay, amazing. There are a couple of hot takes I want to hear from you, things I've seen you talk about and share. One is, you've shared that talking to customers and listening to customers is not always the right strategy in AI, and it might often lead you astray.

>> I don't know if it's that hot of a take. The main thing here is, obviously you should talk to your customers; it's useful. I just think in the AI field, especially from what I've seen over the last three years working on the API and watching it all evolve, the field and the models themselves are changing so quickly that they tend to disrupt themselves, especially around the tooling and scaffolding space. There's a quote I read earlier this week, from an X post by a guy named Nicolas, the founder of a startup called Fintool, where he was sharing a lot of the best practices he's learned building AI agents for financial services. He had this phrase I thought was really good: the models will eat your scaffolding for breakfast. If you rewind back to 2022, when ChatGPT launched, the models were pretty raw, and there was all this product scaffolding, especially in the developer space, to try to steer the model, to build structure around it to get it to do what you want: agent frameworks, vector stores (which I think were really popular back then), a whole smattering of tools. And as you've seen the field play out, the models have changed so much, and gotten so much better, that they ended up literally eating some of that scaffolding.

And I think this is even true today. In that article from Nicolas, the currently fashionable scaffolding is skills and file-based context management. I could see a world where at some point that's no longer useful, where the model can manage all of that itself, or where we move on to some new paradigm that doesn't need this file-based skills-type thing; it's hard to predict. You have literally seen this play out: the agent frameworks, I think, are a little less useful now. There was a period, around 2023, when we thought vector stores were going to be the main way to bring organizational context into the models: you vectorize and embed every bit of your corpus, then do all this work on vector search, optimizing it to surface the right information at the right time. All of that is scaffolding, because the model was not good enough. And it turns out, as the models get better, a better approach is actually to take out a lot of that logic, trust the model, and give it a set of tools for search. It doesn't need to be a vector store.
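The "trust the model and hand it a search tool" pattern can be sketched roughly like this. This is an illustrative sketch, not OpenAI's recommended implementation: the `search_files` helper is hypothetical, and only the tool-schema shape follows the public function-calling format.

```python
import json
from pathlib import Path

# A plain keyword search over files on disk: no embeddings, no vector store.
# The model decides when to call it and with what query.
def search_files(query: str, root: str = ".") -> str:
    """Return up to 5 snippets (as JSON) from .md files containing the query."""
    hits = []
    for path in sorted(Path(root).rglob("*.md")):
        text = path.read_text(errors="ignore")
        idx = text.lower().find(query.lower())
        if idx != -1:
            hits.append({"file": str(path),
                         "snippet": text[max(0, idx - 80): idx + 80]})
        if len(hits) == 5:
            break
    return json.dumps(hits)

# Tool schema handed to the model (function-calling shape); the model can call
# search_files repeatedly instead of you pre-building retrieval scaffolding.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_files",
        "description": "Keyword search over local docs; returns JSON snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}
```

The design point mirrors the conversation: the loop logic ("what should I search for next?") lives in the model, not in your pipeline, so there is less scaffolding for a better model to eat.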

You could hook it up to any type of search. It could literally be files on a file system, with skills and AGENTS.md to steer it. Obviously, there's still a place for vector stores, and I know a lot of companies are still using them, but the entire scaffolding around that, the idea of building a whole ecosystem around it and assuming it's the only scaffolding you need, has really changed.

changed. And so tying this back to like you know uh it's you know you don't always have to listen to your customers because the field is changing so much at any point in time you know a lot of people are kind of in this local local

maximum and if you just blindly listen to your customers they'll be like yeah I want a better vector store like I want a better uh I want a better you know agent framework for this and uh if you had just kind of only chased down that path

it actually would have led you to you know build something that again is the local maxima whereas as the models get better we've had to reinvent event and kind of rethink the right right uh uh abstractions and the right tools and

frameworks to to to to build around these models. Um and the cool slash

these models. Um and the cool slash exciting slash kind of crazy annoying part is it's a moving target. And so

yeah, like the current current smattering of of tools and frameworks right now will likely need to evolve and change pretty significantly over time um as the models get smarter and better.

But that is just the nature of building in the space. I think that's what makes it exciting. Uh but it also means when

it exciting. Uh but it also means when you talk to customers, you kind of need to balance the exact feedback that they want uh with uh where you think the models are going and where you think things will uh trend over the next one to two years. It's interesting how this

is um the bitter lesson is uh you know this big lesson that AI and ML folks learned which is just like uh don't the less you over complicate the less logic

you add to to machine learning to AI the more it'll be able to scale and grow and just like take it all away and let it just just compute basically just give it more power to to get smarter on its own.

Yeah, there's literally a version of the bitter lesson applied to like building with AI where you know we were trying to architect all this stuff around and turns out the models are just kind of you know eat it all away and and and and

honestly like OpenAI API team has like been guilty of this uh where we kind of like took some you know left and right turns when we shouldn't have um but uh yeah the models still end up models get

better and uh we're all learning the bitter lesson day in and day out. So

what would be the the key takeaway for folks building on say the API or just building agents and you know having to build a little bit of this around for now is it just yeah what would be the advice?

>> My general advice, and I've been giving this to people for a while and I think it's still true today, is: make sure you're building for where the models are going and not where they are today. It's clearly a moving target, and a lot of the startups I've seen really do well build a product for an ideal capability that is maybe 80% of the way there today. They end up with a product that kind of works, that's just almost there, but then as the models get better, suddenly it clicks, and their product is incredible because it works. Maybe with o3 it almost works, and then with 5.1 or 5.2 it suddenly unlocks. They're building these products with the model capability improvements in mind, and with that you end up creating an experience that's way better than if you had assumed the models are static in the first place. So that would be my general advice: build for where the models are going, not where they are today. You end up building a better product. You may need to wait a little bit, but the models are getting so much better so quickly that you often don't need to wait that long.

>> So to follow that thread: in the next six to 12 months, where is the API heading? Where's the platform heading? Where are the models heading? As much as you can share, I know there are a lot of secrets here. What are you most excited about, or what do you think people should start to prepare for?

>> I mean, the obvious one is how long of a task these models can do coherently. There's the METR benchmark, which tracks software engineering tasks: how long of a task can these models do 50% of the time, or 80% of the time? I think we're at something like multi-hour software engineering tasks being done by the frontier models 50% of the time, and the 80% number is something like just under an hour. The sobering thing about that chart is that they plot all the previous models on it as well, so you can really see the trend.
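The "task horizon" idea behind that kind of chart can be illustrated with a toy calculation. This is not METR's actual methodology (they fit curves to success data); it's a minimal sketch with made-up numbers showing what "the longest task length a model completes at least p of the time" means.

```python
from collections import defaultdict

def task_horizon(results, p):
    """results: list of (human_minutes, solved) pairs.
    Returns the longest task length whose empirical success rate is >= p,
    or 0 if none qualifies. Buckets are exact lengths for simplicity."""
    buckets = defaultdict(list)
    for minutes, solved in results:
        buckets[minutes].append(solved)
    qualifying = [m for m, outcomes in buckets.items()
                  if sum(outcomes) / len(outcomes) >= p]
    return max(qualifying, default=0)

# Made-up data: the model aces short tasks and degrades on long ones.
results = [
    (5, True), (5, True), (5, True), (5, True),             # 4/4 at 5 min
    (30, True), (30, True), (30, True), (30, True),         # 4/4 at 30 min
    (120, True), (120, False), (120, True), (120, False),   # 2/4 at 2 h
    (360, True), (360, False), (360, False), (360, False),  # 1/4 at 6 h
]

print(task_horizon(results, 0.8))   # 30
print(task_horizon(results, 0.5))   # 120
```

With this toy data, the 80% horizon is 30 minutes while the 50% horizon is two hours, which is the shape of the gap Sherwin describes.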

That's something I'm really excited about. I actually think products today really optimize for tasks the model can do for minutes at a time. Even Codex and the coding tools: in the CLI you're seeing it be interactive, and it's quite well optimized for maybe 10-minute tasks at most. I have seen people push Codex to the limit and do multi-hour tasks, but I think that's more the exception. If you follow this trend, though, in the next 12 to 18 months we could see models that can do multi-hour tasks very coherently. At some point it might reach 6-hour or day-long tasks, where you dispatch the model and have it do things on its own for a while. The types of products you build around that will look very different. You want to give the model feedback; you obviously don't want it to completely run wild for a day. Maybe you do, but you probably don't. And then the universe of things you can have the model do really expands. So that's something I'm really excited about seeing.

Another thing over the next 12 to 18 months that I think will be really cool is improvements in the multimodal models. By multimodality I'm mostly thinking about audio here. The models are pretty good at audio, and I think they're going to get a lot better over the next 6 to 12 months, especially the native multimodal, speech-to-speech models. There's also interesting work being done on new types of models and architectures on the multimodal audio side. Audio, especially in the enterprise and in a business setting, is still a hugely underrated domain. Everyone talks about coding, which is all text, but we're talking in audio right now. A lot of the world's business is done via audio; a lot of services and operations are done by talking. So I think that area is going to look very exciting in the next 12 to 18 months, and there will be even more unlocked in what we can do with audio models.

>> Amazing. So, quick summary: expect agents and AI tools to run longer, with that trajectory continuing to increase, and expect audio and speech to become a bigger deal, more first-party, native, and core to the experience.

>> Yeah.

>> Extremely cool. Okay, I want to go back to another hot take that I've seen you discuss. You're very bullish on business process automation as an opportunity in the world of AI. Talk about that.

>> Yeah, this goes back to the thing I said previously, which is that we live in a bubble in Silicon Valley. A lot of the work we're used to, software engineering, product management, building products, is shaped very differently from the work that runs our entire economy. I see this when I talk to customers: if you talk to any company that's not a tech company, there are a lot of business processes. What I mean by this is, I generally delineate it like this: software engineering is open-ended knowledge work. This is why I think tools like Codex tend to be quite good, because you're giving them open-ended things and they're exploring. Software engineering is fundamentally pretty open-ended, and it's not very repeatable: you build a feature, and you're not trying to build the exact same feature over and over again. A lot of tech jobs are in that space. Data science is kind of in that space, even some of the strategic finance stuff. But as you move further and further away from software engineering and what's core in tech, a lot of jobs are just business processes. They're repeatable operations that some manager at a company has iterated on. There's usually a standard operating procedure that people want to follow, and you don't want to deviate from it that much. In software engineering the ingenuity is in deviating, but a lot of the work being done in the world is actually just running through these procedures and operations. If I call a support line, they're running through one of these. If I call my utility company, there's a bunch of processes and things they can and cannot do for me. So I'm extremely bullish on this general category, and I think it's underrated: because it's so different from what we think about in Silicon Valley, people tend not to think about it. How can we apply AI, and the tools and frameworks we have, toward business process automation, toward automating and making easier these repeatable business processes with high determinism, fully integrated with business data, business decisions, and the different systems within an enterprise? I actually think there's a lot of opportunity and a lot of work to be done in that area, and we just don't talk about it because it's a little bit less in our wheelhouse.
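A hypothetical sketch of what he means by a repeatable process with high determinism: the standard operating procedure stays as explicit, fixed steps, and a model (stubbed here) handles only the fuzzy part, like classifying a free-form request. All names here are invented for illustration.

```python
def classify_request(text):
    """Stub for the one step where a model helps: mapping free-form
    customer text onto the SOP's fixed categories. A real system would
    call a model here; this toy version just keyword-matches."""
    return "billing" if "bill" in text.lower() else "outage"

# The SOP itself is deterministic: fixed categories, fixed actions.
SOP = {
    "billing": ["verify identity", "pull latest invoice", "explain charges"],
    "outage": ["check service status", "file ticket", "give ETA"],
}

def handle_call(text):
    category = classify_request(text)   # fuzzy step -> model
    return SOP[category]                # everything else -> fixed procedure

print(handle_call("my bill looks too high"))
# ['verify identity', 'pull latest invoice', 'explain charges']
```

The design point is that the model never decides what the company can or cannot do; it only routes into a procedure that a person has already iterated on.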

>> So your take here, just to make sure I fully understand it: you think there's a much bigger opportunity outside of engineering for AI to impact the productivity of companies, and also the jobs of folks doing these repetitive, easily automated tasks.

>> Impact jobs, and also just impact how work is done, because so much of work is done this way. I talk to customers all the time, big enterprises, asking: how will AI transform my company? How will it run in a world with AI in 20 years? Software engineering is part of the story, but there's so much more on the business process side, and I actually think it might look even more different there. And the work there is pretty substantial. It's actually interesting: on an absolute basis, I don't know if it's bigger or smaller than software engineering, since software is pretty huge and extensive as well. But it is pretty massive, and it's definitely bigger than you would think based on how people talk about it, or don't talk about it, on X or Twitter.

>> Okay. Going in a slightly different direction: having built the platform and the API, with people building on the API, the biggest question on people's minds is always, how do I avoid having OpenAI squash my idea, build its own thing, and destroy this market I created? What's the general philosophy for how startups should think about where OpenAI is unlikely to go?

>> My general answer here is that the market is so big and so massive that I actually think startups should just not overly think about where OpenAI or these labs are going. I've talked to a lot of startups, ones that haven't worked out and ones that are doing really well. Every startup I've seen fizzle out did so not because OpenAI or a big lab or Google came and squashed them, but because they built something that really didn't resonate with customers. Whereas the ones that take off, even in very competitive spaces like coding (Cursor is huge at this point), take off because they built something people really love. So my general advice is: don't overly stress about this. Just build something people like, and you will have a space in this. I can't overstate how big the opportunity is right now. The opportunity space of building with AI is so big that, as a good example, the Overton window of what is acceptable for VCs to do has completely changed. VCs are investing in competing companies left and right, just because the opportunity is unlike anything we've seen before. And while that affects how VCs operate, from a startup perspective it's the most empowering thing in the world, because even if you just build something that some people really, really love, you will end up with a massively valuable business. That's why I tell people not to overthink it.

The other thing I think is important to remember, at least from an OpenAI perspective: one thing we've always held very near and dear, which both Sam and Greg have helped reinforce from the top, is that we fundamentally view ourselves as an ecosystem platform company. The API was our first product. We think it's really important for us to foster this ecosystem and continue to support it, not squash it. If you look at the decisions we make, this is woven through all of them. Every model we've released in one of our products gets released in the API. Even the Codex models we release now, which are a bit more optimized for the Codex harness, always find their way into the API, and all of our customers end up using them. We don't hold back on any of that. We think it's really important to keep our platform neutral, so we don't block competitors, and we allow people access to our models. We've also recently been testing more of the Sign in with ChatGPT product. We want to foster this ecosystem, and I think it's really important that we do. The general thinking here is that a rising tide lifts all boats. We might be an aircraft carrier, we're pretty big at this point, but we think it's important to raise the tide, because everyone benefits, and I think we'll benefit as well: our API itself has grown pretty significantly because we act this way. So I'd really encourage people not to view OpenAI as this thing that'll shove people out of the way, but instead to focus on building something valuable. We remain committed to providing an open ecosystem.

>> Why is that important to OpenAI, this focus on building a platform and creating a way for people to build businesses? Has that been the vision from the beginning, that we want this to be a platform?

>> It's been the vision from the beginning. It goes back to our charter, actually, our mission. The OpenAI mission has always been, one, to build AGI, which we're obviously doing, and two, to spread the benefits of it to all of humanity. The main part there is "all of humanity." Obviously ChatGPT is trying to do this; we're trying to reach the whole world. But very early on, and this is why we launched the API back in, I think, 2020, really early, we realized we don't think we as a company will be able to reach all of humanity ourselves. Every corner of the world runs pretty deep. So we actually feel that in order to fulfill our mission, we need some platform-style offering where we can empower other people to build, you know, the customer support bot for podcasters and newsletter hosts, because we're not going to be able to do it ourselves. We've largely seen this play out with the API. This is why we talk to so many of our customers and really love seeing the diversity of things built on it. It's been there since day one, because we view it as an expression of our mission.

>> And you haven't even mentioned the app store that you guys are launching, the ChatGPT app store. Is that under your umbrella, by the way, or is that a different org?

>> It's a different team, under ChatGPT. We obviously collaborate very closely with them, and they built an Apps SDK in close collaboration with our team, but that's more within the ChatGPT umbrella. It's another example of this, though, right? ChatGPT has these 800 million weekly active users who just keep coming back over and over again. It's a great asset to have as a business, but man, would it be better if we could somehow allow other companies to come in, take advantage of this, and build for this audience as well. And ultimately we think it'll help us expand that group too, right? So it all comes back to the mission, and we find that being a platform, being open, tends to help here.

>> Just that number, 800 million weekly active users, nearly a billion people using it weekly. It's absurd how used to these numbers we are now, but that's insane, unprecedented.

>> Yeah, it's mind-boggling for me to think about from a scale perspective, honestly. The way I think about it is that 10% of the world, and growing, by the way, it's just shooting up, comes to ChatGPT and uses it every week.

>> And I just want to double down on this point you're making. OpenAI's mission was to make AI available to all humanity, and I think some people diss that. They're like, oh, you know, it costs money. But there's a free version of ChatGPT that anybody can use that is not so different from the most powerful AI model that exists in the world, for free, not gated, available to anyone. If you're a billionaire, there's only so much more you can get out of AI than what someone in a village in Africa can get. And I know that's always been really important to OpenAI.

>> Yeah. I mean, that's why I think we've leaned into the health work. We've leaned into education, which is going to be very interesting here. The other insane trend is that the free model has gotten so smart over time. The free model back in 2022 was good at the time, but it's nothing compared to what you get today, because today you get GPT-5. So raising the floor across the world is something we're really trying to do, and we view it as part of our mission. The other flip side of this, by the way, kind of talking about the billionaires: people have been saying you're using the same iPhone that Mark Zuckerberg is probably using. Well, for $20 a month you're basically using the same AI that the billionaires are using. For $200 a month you get the same Pro model that all the billionaires are using, though they're probably not using Pro for everything; they're probably just using the Plus-tier models day in and day out. So this democratization, this spreading of the benefit across all of the world, is something that's really meaningful to us and drives a lot of what we do.

>> One last question, for folks thinking about building on the API, or who are realizing, oh wait, I could do cool stuff with OpenAI models and APIs. What does your API and platform allow people to do? I know you can build agents on top of the platform. Just talk about what you allow.

>> So fundamentally, the API offers a bunch of developer endpoints, and these endpoints basically let you sample from our models. The most popular one we have right now is called the Responses API. It's an endpoint optimized for building long-running agents, agents that will work for a while. At a very low level, you're basically just giving the model text. The model will work for a while; you can poll it to see what it'll do, and you'll get the model's response back at some point. That's the lowest-level primitive we have, and it's actually what a lot of people use; it's the most popular way of building on top of the API. It's super unopinionated, and you can do basically whatever you want with it.
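The dispatch-and-poll shape Sherwin describes (give the model input, let it work for a while, poll until the result is ready) can be sketched with a stub in place of the real endpoint. Everything here (`FakeResponsesClient`, the job fields, the tick counter) is invented for illustration; the actual OpenAI SDK calls look different.

```python
import itertools

class FakeResponsesClient:
    """Stand-in for a long-running 'responses'-style endpoint: create a
    job, poll its status until it completes, then read the output."""
    def __init__(self):
        self._jobs = {}
        self._ids = itertools.count(1)

    def create(self, input_text, ticks_to_finish=3):
        job_id = f"resp_{next(self._ids)}"
        self._jobs[job_id] = {"remaining": ticks_to_finish,
                              "input": input_text}
        return job_id

    def retrieve(self, job_id):
        job = self._jobs[job_id]
        if job["remaining"] > 0:          # still "working"
            job["remaining"] -= 1
            return {"status": "in_progress", "output": None}
        return {"status": "completed",    # toy 'model' output
                "output": f"echo: {job['input']}"}

client = FakeResponsesClient()
job_id = client.create("summarize the quarterly report")
while True:                               # caller polls until done
    result = client.retrieve(job_id)
    if result["status"] == "completed":
        break
print(result["output"])   # echo: summarize the quarterly report
```

The point of the pattern is that the caller doesn't block on a single request; it dispatches work and checks back, which is what makes multi-hour agent tasks practical.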

We've also started building more and more layers of abstraction on top to help people build some of these. The next layer up is the Agents SDK, which has also gotten extremely popular. It allows you to use the Responses API, or some other endpoints we have, to build what you might more traditionally think of as an agent: an AI working in a loop. It might have sub-agents that it delegates to. It starts building in all this framework, all this scaffolding, actually; we'll see where this all goes. But it makes it a lot easier for you to build these kinds of agents: giving them guardrails, letting them farm out subtasks to other agents, and orchestrating a swarm of agents. The Agents SDK allows you to do that.
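The "AI in a loop, delegating subtasks, with guardrails" pattern he describes can be sketched like this. This is a toy loop with stubbed "model" decisions, not the actual Agents SDK API; the names `plan`, `execute`, and `run_agent` are invented for illustration.

```python
def plan(task):
    """Stub for the model's planning step: split a compound task into
    subtasks, or return [] when the task is atomic."""
    return [t.strip() for t in task.split(" and ")] if " and " in task else []

def execute(task):
    """Stub for a leaf agent doing one unit of work."""
    return f"done: {task}"

def run_agent(task, max_depth=3):
    """Agent loop: ask the 'model' for subtasks; delegate each to a
    sub-agent, or execute directly if the task is atomic. max_depth is
    a guardrail so delegation can't recurse forever."""
    if max_depth == 0:
        return [f"refused (too deep): {task}"]
    subtasks = plan(task)
    if not subtasks:
        return [execute(task)]
    results = []
    for sub in subtasks:               # farm out to sub-agents
        results.extend(run_agent(sub, max_depth - 1))
    return results

print(run_agent("draft the email and book the meeting room"))
# ['done: draft the email', 'done: book the meeting room']
```

Swapping the stubs for real model calls is what a framework like the Agents SDK handles for you; the loop, delegation, and guardrail structure stays the same shape.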

And then above that, we've started building tools to help with the meta level of deploying an agent. We have a product called AgentKit, with widgets, which are basically a bunch of UI components you can use to very easily build a beautiful UI on top of either our API or the Agents SDK, because a lot of these agents look very similar from a UI perspective. We also have a smattering of eval products, like an evals API: if you want to test whether your model, your agent, or your workflow is working, you can test it in a very quantitative way using our evals product. So I view it as these various layers, all helping you build what you want with our models, with increasing levels of abstraction and opinionation. You can use the whole stack, which very quickly lets you build an agent, or you can go as far down the stack as you want, to the Responses API, and build whatever you want because of how low-level it is.
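The quantitative testing he mentions boils down to running your agent over a set of graded cases and computing a pass rate. A minimal sketch (the toy agent, graders, and cases are all made up; real eval products are much richer):

```python
def run_eval(agent, cases):
    """cases: list of (input, grader) where grader(output) -> bool.
    Returns (pass_rate, list of failing inputs)."""
    failures = [inp for inp, grader in cases if not grader(agent(inp))]
    rate = 1 - len(failures) / len(cases)
    return rate, failures

# Toy 'agent' and graded cases.
agent = lambda q: q.upper()
cases = [
    ("refund policy", lambda out: "REFUND" in out),
    ("shipping time", lambda out: "SHIPPING" in out),
    ("warranty", lambda out: out.islower()),   # this one will fail
]

rate, failures = run_eval(agent, cases)
print(rate, failures)   # pass rate ≈ 0.67, failures: ['warranty']
```

The value of keeping evals this mechanical is that you can rerun the same cases every time the model or the prompt changes and watch the pass rate move.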

>> Sherwin, is there anything else you want to share, anything you want to leave listeners with, anything we haven't touched on that you think might be helpful, before we get to our very exciting lightning round?

>> The only thing I'd leave folks with is that I think the next two to three years are going to be some of the most fun in tech and in the startup world that we'll have had in a very long time, and I would just encourage people not to take it for granted. I entered the workforce in 2014. It was great for a couple of years, then I felt like there was a period of five to six years where tech wasn't very exciting. But the last three years have been the most insanely exciting, energizing period of my career, and I think the next two to three years are going to be a continuation of that. So I'd encourage people not to take it for granted. I'm trying not to take it for granted. At some point this wave is going to play out, and things will be a lot more incremental. But in the meantime, we're going to get to explore a lot of really cool things, invent a lot of new things, and change the world and how we work. That's the main thing I'd leave folks with.

>> I love this message. I want to spend a little more time on it.

When you say don't miss it, what do you recommend people do? Just build, lean in, learn, join a company building really interesting things? What's your advice to folks who are like, "Okay, I don't want to miss the boat"?

>> Yeah, I would just say engage with it. It's basically what you said: lean in. Building tools on top of this is part of the story, but so is just using the tools; you don't need to be a software engineer to lean into this. I think a lot of jobs are going to change here. So use the tools, and understand the limitations of what they can and cannot do, so you can watch the trend of what they start to be able to do as the models improve. It's basically about getting used to the technology and getting familiar with it, instead of laying back and letting it pass you by.

>> On the flip side of that, there's a lot of stress and anxiety: there's so much happening, how do I keep up, I've got to learn Claude Code this week, oh god. Is there something you've learned, being at the center of this, about how not to get overly stressed and worried about missing things, and how to stay on top of the news?

>> Yeah, I think I'm personally a bad example of this, because I'm basically chronically online on X and our company Slack, so I end up absorbing a lot of it. What I will say, though, from observing other folks who are less addicted to this stuff than I am: a lot of it is noise. You don't need 110% of it to pass through your mind. Honestly, just leaning into one or two different tools, starting small, is already more than you need here. I think the combination of the frenetic pace of the industry and X as a product creates this insane pace of news, which is honestly very overwhelming. The main thing is, you don't need to know all of that to really engage with what's happening right now. Do something even as simple as installing the Codex client and playing around with it, or installing ChatGPT, connecting it to a couple of your internal data sources, Notion, Slack, GitHub, and seeing what it can and cannot do. All of that is part of it.

>> Amazing. Sherwin, with that, we've reached our very exciting lightning round. I've got five questions for you. Are you ready?

>> Yeah, absolutely.

>> First question: what are two or three books that you find yourself recommending most to other people?

>> Oh, I'll talk about one non-fiction book and one fiction book. The fiction book I just finished reading, and I really recommend it: There Is No Antimemetics Division by qntm. I think he's an online author; I saw it being shared on X. It's a science-fiction kind of book, and I basically devoured it in two days. It's super well written, super fascinating. It's about a government agency that's fighting things that make you forget them. It's just a very smart, creative book, and honestly fresh in terms of source material, so I'd recommend that one. The book is also unintentionally hilarious: it's meant to be this sci-fi, almost horror-style book, but it made me laugh a couple of times. So that's the fiction book. For non-fiction, I'm going to cheat and recommend two. In the last year I've been reading a lot more about China and US-China relations, and two books that came out in the last year have been really eye-opening for me in that regard. The first is Dan Wang's book Breakneck. That one was really good. I really liked his analogy that the US is the lawyerly society and China is the engineering society, with pros and cons to each. I read it and thought, hm, yeah, it does seem like we're run by lawyers in the US. The other one is Patrick McGee's book on Apple and China, which was super interesting. I'm a huge Apple fanboy; if you could see my desk right now, it's all Apple stuff. One, it was just super fascinating learning about Apple's relationship with China, and two, it had a lot of inside information about Apple as a company that I found fascinating. It was quite a page-turner, and a very timely book as well.

>> The Antimemetics book sounds amazing. I'm buying it right now as you're talking.

>> Yeah, it's only a couple hundred pages. I literally finished it in two days. It was just so good.

>> Okay, great tip. Favorite recent movie or TV show you've really enjoyed?

>> Yeah, that one's tough, because I have two kids and a busy job, so I really haven't had much time to watch TV shows. I will say, in the last couple of weeks I watched a couple of episodes; I'm actually a big anime guy. There's a new season of an anime called Jujutsu Kaisen out, season 3 of JJK, and it was really good. In general, I'm a huge fan of Japanese anime. I think they create the most novel and unique plots and universes, ones that Western media has shied away from. So I'm generally a big fan of that, but yeah, I haven't watched much; I just saw a couple episodes of JJK recently.

>> Extremely understandable in your role.

>> Yeah.

>> Favorite product you recently discovered that you really love?

>> Yeah, okay. So I recently had to set up Wi-Fi and home networking, and I went all in on Ubiquiti routers and security cameras. I had never heard of it before I had to do this; I always just had a very simple setup. And it is just such a well-built product. I don't know if you've used it before, but it's basically the Apple of home networking. Beautiful products. But the thing that actually makes it extremely good is that its software is good. They have a really great mobile app to help manage all of the home networking. So basically with Ubiquiti, you can buy wireless routers. You need Ethernet wiring throughout your house to use it. But I actually think what makes it really good are the security cameras. If you have security cameras plugged into the Ubiquiti ecosystem, they have an incredible mobile app, plus an Apple TV app and an iPad app, to see the live feed of your cameras. They're a little pricey, but not that pricey. It's been just an incredible product experience.

>> All right, I went Eero, so I made a mistake. Good tip.

>> Eeros are pretty good, too. But I'm fully converted to Ubiquiti at this point.

>> Good tip. Okay, two more questions. Do you have a favorite life motto that you find yourself coming back to in work or in life?

>> Yeah. The one that I always repeat to myself is: never feel sorry for yourself. There are a lot of things that are going to happen at work and in life, and reminding yourself to never feel sorry, and that you always have a sense of agency to pull yourself up, is something that I've had to tell myself a lot, and also something that I repeat to a lot of other folks as well.

>> Last question. So in your previous life, you worked at Opendoor, where you led work on figuring out how much to pay for houses. You basically built a model that told the company, "Here's how much we'll pay for this house." What's a variable in the price of a house that you didn't expect to be really important?

>> There's a bunch that were surprising. I'll maybe

list a couple of the most interesting ones. Power lines, high-voltage power lines, actually impact your price quite a lot. I didn't fully internalize this until I went to Dallas and observed it: when your house sits next to one of these giant high-voltage lines, it's buzzing, and most people have families, so you don't want your kids near there. So I think that was one that really surprised me.

>> That makes sense.

>> Yeah. And then the other one, which was always really difficult for us to quantify, was floor plans. Yes, of course it's really important, but quantifying what a good floor plan is versus a really bad one is hard. We were doing all these things like: how wide is the kitchen, what style of kitchen is it, where's the master bedroom. It was just really hard to quantify. But I remember floor plan was a big one, because we'd have a home that wouldn't sell, and then our ops team would go in and say, yeah, it's a floor plan issue. How could you tell? You go inside and you just feel it. So yeah, those were surprising ones. And then the last one that was more impactful than I thought is general curb appeal, even the front door. I actually think there's a Zillow book on this, where front door replacement tends to be the highest-ROI improvement for homes. Just the feel as you walk up to the home as a buyer, what you're interacting with in the first moments at the house, I'd underrated its importance.

>> That is extremely interesting. And I love that you had to figure out how to do all this in code and not walk the floor plans yourself.

>> I have a bunch of stories around floor plans. They're not digitized, so there's a handful of people who have paper floor plans of all these homes in Phoenix and Dallas. Yeah, a lot of fun stories from the Opendoor days.

>> Okay, Sherwin, thank you so much for doing this. This was incredible. Where can folks find you online, and how can listeners be useful to you?

>> Yeah. So I'm online on Twitter, on X; I'm just Sherwin Wu. I mostly tweet about OpenAI and the API and some of the products that we're launching. And then how folks can be useful to me: I love hearing about things that people are building. So if you're working on a startup, or if you're hacking on an idea, reach out to me on X. I would love to hear about what you're building and learn how OpenAI can help support you.

>> Amazing. Sherwin, thank you so much for being here.

>> Yeah. Thank you, Lenny.

>> Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com.

See you in the next episode.
