
From Idea to $650M Exit: Lessons in Building AI Startups

By Y Combinator

Summary

Key Takeaways

  • **Identify demand by observing current spending**: Instead of guessing what people want, look at what they are already paying for. This includes tasks currently performed by customer support, paralegals, or personal trainers, as these represent existing market demand that AI can address. [03:39]
  • **Three categories for AI startup ideas**: AI startup ideas can be categorized into assisting professionals, replacing entire job functions, or enabling previously unthinkable tasks. Each category offers significant market potential, with the latter two tapping into much larger addressable markets than traditional software. [04:30], [05:13]
  • **Build reliable AI by mimicking experts**: To build reliable AI products, break down tasks into the specific steps an expert would take with unlimited resources. Then, translate these steps into code or detailed prompts, focusing on deterministic workflows where possible to ensure accuracy and consistency. [10:16], [11:16]
  • **Rigorous testing is key to AI reliability**: Achieving high accuracy in AI applications requires extensive evaluation and testing. Develop a robust eval framework, iterate relentlessly on prompts, and use real customer data to refine the AI's performance, aiming for over 97% accuracy in production. [15:43], [19:00]
  • **Product quality drives marketing, not vice versa**: While marketing and sales are important, the quality of your AI product is paramount. An exceptional product naturally generates word-of-mouth and media attention, making sales efforts significantly more effective and reducing reliance on expensive marketing campaigns. [24:20], [24:48]
  • **Product is more than just code**: A product's success isn't solely defined by its user interface or code. It encompasses the entire customer experience, including support, training, and human interactions. Investing in these surrounding elements is crucial for user adoption and overall product success. [29:48]

Topics Covered

  • AI unlocks markets by replacing paid human tasks.
  • AI democratizes access to expensive human services.
  • Relentless evaluation is key to reliable AI.
  • Deep implementation creates defensible AI products.
  • Great product quality beats marketing and sales.

Full Transcript

What we're going to talk about today is

how my company built an AI app that was so good we were able to bring it to an exit for $650 million, and how you can do that too. All right, so really we're

talking about three big ideas today. The

first is

what ideas to pick. How do you decide

what to pursue? Second is how you

actually build it. And third, and

honestly often overlooked, is how you

take that thing that you built and market and sell it successfully. Before we dive into this, a

little bit about me so you know who's

talking to you. I grew up a coder. Uh

I've been building stuff since as long

as I can remember. It's probably the

same as basically everybody here. Bit of

a side quest for me, but I fell in love

with law and policy and I became a

lawyer. And I had a pretty conventional

though brief legal career: law school, clerkship, you know, big law firm, etc. I think

like anybody who builds stuff and then

goes to one of these old professions

like law or accounting or finance or

whatever, the first thing you find out

is I cannot believe that they were doing

it this way. And so I immediately left

that and founded a company called Casetext in 2013, when I think a lot of you

were about turning eight. And maybe as a

side note, that's about how long it

takes sometimes for these companies to

be successful. So, I know you're, you

know, 18, 19, 20, 21, 22, whatever age right now. Be ready to sign up for one

of the most amazing adventures of your

life when you start a startup, but it

takes time. At Casetext, we'd been focused

for, you know, the vast majority of our

experience on a deep conviction that AI

when applied to law can make a huge

difference. And by the way, it wasn't

even called AI when we started focusing

on it. It was called natural language

processing, maybe machine learning. But

one of our AI researchers, who is here today, Javeed, saw an early application, as soon as the BERT paper came out, "Attention Is All You Need," etc., this like seven years ago, of how AI technology could apply to making lawyers' lives better, for example making search a lot better. Because we were so focused on large language models and were researching deeply in this space, we got really early access to GPT-4,

like summer 2022. We were at like $20 million in revenue, we were doing great, I had like 100 people, and we stopped everything that we were doing and said we're going to build something totally new based on this new technology. That became a product called CoCounsel, which was the first ever, and I think still the best, AI assistant for lawyers, for reasons I'll go into in the rest of this talk. We were acquired by Thomson Reuters about two years ago for $650 million in cash. By the way, that

feels like a big number but I think for

a lot of folks in this room you're going

to look back at this talk and be like I

can't believe that was a big number back

then you guys are going to be able to

build things that are so much more

valuable. I really believe that, and I think that's because what AI is going to unlock for all of you is the

ability to build amazing stuff for this world. So, okay, how do you pick an idea? Y Combinator's saying is: make something people want. And the reason

they had that saying is because it's

genuinely difficult to know what people

want, especially in like the old world of building software. You kind of like

have to build something, get it in users

hands, and try and fail a lot of

different times. And you just hope that

it's something that people actually want

to use. So that's why the saying for Y Combinator is make something people want. I

actually think it just got a lot easier

because what do people want? Well, what

do people want? For example, things

they're paying for right now. People are

currently paying people to do tasks,

right? In this case, it's a bunch of

very unhappy like customer support

people or something like that. But we

already know what people want because

they're paying people to do it. This

includes a lot of work like customer

support or insurance adjusters or

paralegals, or, you know, things you do

in your personal life like personal

trainers or executive assistants or

whatever. That is what people want. And

so the the problem of choosing what

people want just got a lot easier

because now you just have to look at what people are paying other people to do. For a lot of those problems, you know, AI like LLMs can solve many of the problems that people work on right now, and if not that, then robotics

can solve a lot of things that people

are working on in the physical world. And what I think you're going to see, as you decide what you're going to build and first pick an area to target, is that it really kind of falls under three different categories. One is like

assistance

where say a professional needs help

accomplishing a task. That's what we built at CoCounsel. Lawyers need a lot

of help reading a lot of documents,

doing research, reviewing contracts,

marking them up, making redlines, sending them to opposing counsel. So

that's one big category is assisting

people doing their work.

The second big category is just

replacing the work altogether. People

currently hire lawyers. What if we just

became a law firm powered by AI? People currently hire accountants and financial experts and physical therapists and, you know, people to fold your laundry, whatever it may be. You can just replace that task using AI. And finally, the third category

is you can do things that were previously unthinkable, right? Like, for

example at law firms they would have

hundreds of millions of documents and

they would never think in a million

years I should have people read over

every single document and categorize it

in certain ways and summarize it and

index it, etc. It just would be insane, right? It would cost them millions and

millions and millions of dollars. But

now that AI is here, you can have

thousands of instances of Gemini 2.0 Flash, or whatever, read over every

document. The previously unthinkable is

now thinkable. These are basically the

three categories um of ideas to choose.

And what I think is incredible about this is that the amount of money to be made with each of these new categories has gone way up. It used to be that

what's called the total addressable

market, which is basically how much

money you can make from your product was

the number of like professionals, for

example, number of seats you can sell

times the dollars like $20 per month or

whatever, right? And by the way, a lot

of many billion dollar companies are

built selling seats to x number of

professionals. But today, the actual

amount of money that we already know

people and companies are willing to

spend

is the combined salaries of all the

people they're currently paying to do

the job. And that number is like a thousand times bigger. You pay $20 a month to

solve a problem. For example, you know, you pay a typical SaaS kind of subscription,

but you might pay five or 10 or even

$20,000 a month to certain professionals

to solve problems for you. So the amount

of money that you can make with your new

applications with AI has gone up by a

factor of 10, 100, or even a thousand

compared to what it used to be. I want

to take a quick moment because it might

sound like pretty dystopian like we're

talking about taking all these salaries

and these become, you know, your

addressable market. I think it's kind of

the opposite. I think it's beautiful. I

think the future is beautiful for

two reasons. The first is that you're

going to unlock a future when you

replace or substantially assist certain

jobs. Sam Altman wrote about this in a recent essay: people used to have a job called lamplighters, back when we didn't have, you know, electricity and lights. People would go around with, like, a matchstick, lighting all the lamps at night and then putting them out again by snuffing the candles, right? That's

what things used to be. And we couldn't

even imagine the kind of stuff we're

doing now because uh that's what we were

stuck doing in the past. So you're going to

unlock a future that we can't even

imagine today when we you know move past

the roles that we're currently doing

right now. It'll feel antiquated 10 or

15 or 100 years from now to do the kind

of things we're doing today because

you're going to help us move past that.

But as importantly, what I think some

people don't think about with this

stuff, which I think is very true, is

you're going to democratize access to

things that used to be really,

really hard or very expensive. In the field we worked in, law, over 85%

of people who are low income don't get

access to legal services. It takes way

too long and it's way too expensive

working with human lawyers, right? But

if you could help make lawyers 100x

faster and 10x cheaper or you know

frankly just provide those services

yourself as a new law firm powered by AI

then all of a sudden, in situations where lawyers had to turn away clients because they did not have enough money, you can now say yes. And that applies

everywhere everybody should get the

world's best financial assistant

everyone in the world should get the

best executive or personal assistant

everyone in the world, you know, can already have the best coding assistant in tools like Cursor and Windsurf, etc., right? I do think that, despite the

fact that the way I'm telling you to pick an idea is to potentially replace jobs, I think you're going to do

something really amazing for the vast

majority of consumers and enterprises uh

by unlocking a better future and by

democratizing access to things that used

to be only for the very wealthy.

Okay, so that's how to pick an idea: pick a job, then replace it, assist it, or do the previously unthinkable, and build a better future.

But how do you actually build this

stuff? I'm going to give you a quick

outline of how we built it. What's kind

of nuts to me is everything I'm going to

say right now may sound very simple and

commonsensical and maybe even obvious, but the craziest part is nobody's doing it. Like, nobody's picking ideas the way

that I recommended in terms of picking

job categories. There's very very few

companies out there doing that. And even

fewer companies are doing what I hope

will look like pretty obvious and simple things to build reliable AI. I underscored "reliable," for what it's worth, because that's going to be the key in many circumstances for getting from a cool demo, as Andrew was saying earlier today, to something that actually works in practice. Here's like

four quick points about how to actually

build this thing.

The first is think about like making an

AI assistant or an AI replacement for

say a profession.

Ask yourself like what do people

actually do? What does a professional in

this field actually do? What does a

personal trainer or fitness coach do if

that's the app you're deciding to build?

What does a financial assistant do or

financial analyst do? And be like super

specific.

I'm going to say this a few times, but

it is really helpful to actually know

this answer, not like make it up. It was helpful for us that I was a lawyer,

my co-founders were lawyers, 30 to 40%

of my company, even the coders were

lawyers because we actually lived it.

That may not be the case for you. Just

go be like an undercover agent

somewhere. Really learn what happens at

these companies, right? What do these

people do? The other way to do it, by the way, is you might be the tech talent, and you might find yourself a co-founder who has some deep expertise in a field. But whatever way you get there,

you know, find out what the specific things are that people do that you can assist or replace. And then ask

yourself this question. How would the

best person in that field do this if

they had like unlimited time and

unlimited resources, like a thousand AIs that can all work, you know, simultaneously to accomplish this task,

right? How would the best person do this

and work backwards from there, right?

What are the actual steps that somebody

might take to accomplish a task? To just give you an example from our legal field: we did a version of deep research two and a half years ago, as soon as we got access to GPT-4. It was like the first thing that we did, and we asked, what was the best lawyer going to do if given this research question? It wasn't like just "generally research" — what does that even mean? We broke it down into steps.

Okay, first you know they get a request

for this research project and they say

okay well I need to understand what this

really means. They might ask clarifying

questions quite like deep research today

if you've used it. And then they might

make a research plan. They might execute dozens of searches, which might bring back hundreds of different

results. They'll read every single one

of them very carefully. Kick out the

stuff that's not relevant, because search results sometimes have irrelevant

stuff. Bring in the stuff that is

relevant. Make notes about what they're

seeing, right? Why is this relevant? Why

is this not relevant? Where does this

fit into my answer? And then, based on all of that, put it all together in an essay. And then maybe even have a step at the end where you check the essay to make sure it's accurate and reliable, you know, actually refers to the right resources, etc. These are the kind of steps that a

real professional might do when doing

research. So write them down. Now you

turn to code. Most of these steps for

the kinds of things you'll be doing end

up being prompts. One or many prompts,

right? One prompt might be read the

legal opinion and decide on a scale of

zero to seven, how relevant is it to the

question that's being asked. One prompt

might be given all these notes I've

taken in all the cases I've read so far,

write the essay. One prompt might be

like, here's a here's a footnote in the

essay, here's the original resource. Is

this thing, you know, accurately cited

or not? The reason why many of them are prompts is because they're the kinds of things that would once require human-level intelligence, but now you're injecting it into, like, a software application. So now you need to do the work of turning it into a great prompt, which I'll talk about in one second, to actually get that human-level intelligence. By the way, if you can get

away with it not being a prompt, if it's

like deterministic or it's like a math

calculation or something like that,

that's better. Prompts are slow and

expensive. Tokens are still expensive.

So when you're breaking down these

steps, some of these things might just

be good old software engineering, right?
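As an illustration of the pattern described here, where each step becomes either a prompt or plain deterministic code, here is a minimal hypothetical sketch; the call_llm stub, the prompt wording, and the step functions are all assumptions for illustration, not code from the talk:

```python
# Hypothetical sketch: each expert step is either an LLM prompt or
# plain deterministic code. call_llm stands in for a real model API.

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    if "scale of 0 to 7" in prompt:
        return "6"
    return "Draft essay based on the notes provided."

def score_relevance(question: str, document: str) -> int:
    """Prompt step: the kind of judgment that once needed a human."""
    prompt = (
        f"Read this legal opinion:\n{document}\n\n"
        f"On a scale of 0 to 7, how relevant is it to: {question}?\n"
        "Answer with a single integer."
    )
    return int(call_llm(prompt))

def filter_relevant(question: str, documents: list[str], cutoff: int = 4) -> list[str]:
    """Deterministic step: ordinary software engineering, no extra tokens."""
    return [d for d in documents if score_relevance(question, d) >= cutoff]

def write_essay(question: str, relevant_docs: list[str]) -> str:
    """Prompt step: synthesize the surviving notes into an answer."""
    notes = "\n---\n".join(relevant_docs)
    return call_llm(f"Given these notes:\n{notes}\n\nWrite an essay answering: {question}")

def research(question: str, documents: list[str]) -> str:
    """The workflow: the output of one function feeds the next."""
    return write_essay(question, filter_relevant(question, documents))

print(research("Is the contract enforceable?", ["Opinion A...", "Opinion B..."]))
```

In a real system, call_llm would hit an actual model and the deterministic filter would save both latency and token cost, which is the point the speaker makes about preferring plain code where possible.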

Do that when you can. And then here you

make a decision when you find out how

the best person would approach this. If it's pretty deterministic, like every single time they do this task,

they always follow the same five steps.

Simple. Make it a workflow, right? It's

actually the easiest outcome for you.

And to be honest, a lot of the stuff

that we built while building CoCounsel was exactly like this. Every

time you do this task, you're basically

going to take the same six or seven

steps. And you don't need to have

frankly, like, LangChain or whatever. Just Python code. This

function's output goes into this function, that function's output into the next. Boom.

You're done. Right? Simple. Sometimes

it's not so simple. Sometimes how an expert would approach the problem really

depends on the circumstances. Maybe they

need to make a very different kind of

research plan, pull from different

resources, run different kinds of

searches, read different kinds of

documents, whatever it may be that

you're doing, right? That's how you get

to something that's a little bit more

agentic. That's harder to make sure it's

good. But maybe that's what you have to do, right? I'll underscore this again: in doing all of this, have some form of domain expertise, somebody who knows what they're talking about here, which, by the way, you can also acquire just by talking to a lot of people. There are lots of different ways to get there, but don't fly blind. Don't assume "this is the way that all government employees in this field do X"; really know. Okay. So that's the basic

way you can build these AI capabilities that start to round things out. And that's it, right? Simple. The hard part, frankly,

isn't building it. The hard part is

getting it right. Like how do you know

the research was done well? How do you know it read the document right? How do you know it did the insurance adjustment correctly? How do you know it made a correct prediction about whether to buy or sell a stock or

whatever it is that you're doing? This

is where evaluations play a very very

very large part. And this is the thing

that I see most people not doing because

they build like demo level stuff that

frankly is like 60 to 70% accurate. And

if we're being honest, you can probably

raise a pretty good round of capital by

showing your cool demo to uh VC

partners. And you can even possibly sign

on your first few customers with the

cool demo as a pilot program, right?

But then it doesn't work in practice.

And so all that excitement and VC

capital raised and pilot program

excitement, etc. falls apart if you

can't make something that actually works

in practice. And making something that

works in practice is really hard, because LLMs are like people: you know, you didn't have your coffee that morning, you woke up on the wrong side of the bed; it might just output the wrong stuff for prompts. I'm sure you've all

seen this before. Even if you just use ChatGPT, you've sometimes probably been blown away by its brilliance, and other times shocked by how incredibly wrong it was about code, or, you know, some informational lookup, or just hallucinating when George Washington's birthday was, or whatever it is, right? So how do you deal with that? I'll tell you how we dealt with it. This is not the whole answer, but a big part is evaluations.

This all begins again from domain

expertise which is like what does good

look like? What does it mean to do this

task super, super well? If you're doing research, then for a given question, what is the right answer? What must the right answer include? For a given document, if you're asking a question about that document, what must it pull out of that document? What pages should it find the information on? What does good look like?

This is true of the overall task like

complete this research for me, but also

each microtask necessary to complete the

overall task, like which search queries are good search queries versus bad search queries. Here again, not to sound like a broken record, but it's good to know what actual professionals would say about this,

right? So, what does good look like? And

then those become your evals. My favorite thing to do when I'm writing evals, when possible, is to turn the output into a very objectively gradable answer. For example, have the AI just output true or false, or a number between zero and seven, or whatever, because then it's really easy to evaluate.
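As a sketch of what "objectively gradable" might look like in practice: constrain the model to a bare integer and validate it before grading. The prompt wording and helper names here are assumptions; the talk only specifies the 0-to-7-scale idea.

```python
# Sketch: constrain the model to an objectively gradable output,
# a single integer from 0 to 7, so an eval can grade it with ==.

def build_relevance_prompt(question: str, document: str) -> str:
    # Hypothetical prompt wording, not from the talk.
    return (
        f"Question: {question}\n"
        f"Document: {document}\n"
        "On a scale of 0 to 7, how relevant is the document to the question?\n"
        "Respond with ONLY a single integer."
    )

def parse_score(raw: str) -> int:
    """Validate the model's answer so a malformed output fails loudly."""
    value = int(raw.strip())
    if not 0 <= value <= 7:
        raise ValueError(f"score out of range: {value}")
    return value

# Grading an eval case is then an exact match:
assert parse_score(" 6\n") == 6
```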

That's how relevant it is: it's not a seven, not a five, it's a six. And if you have that, then you can set up an eval framework. I like promptfoo; I don't know if you guys use that. It's open source and runs on your command line. There are many frameworks out there that you can use to put together these evaluations; it doesn't really matter. At the end of the day, it's like: for this input and this prompt, the answer should be six. Make like a dozen of these, and try to match what your customers are actually going to throw at your program, right? Make a dozen, and then try to get it perfect on a dozen, then get to 50, then get to 100, and keep on tweaking the prompt until it actually passes all the tests you keep on throwing at it. If

you're doing really well at this, have a holdout set, and don't, you know, look at those while you're writing your prompts. Make sure it also works on those, so you're not just fine-tuning the prompt just for your

evals, right? What you'll find, if you're being really careful about this (and I use the word "fine-tuning" without any, like, technical fine-tuning; you can go so far with just prompting), is that the AI gets things wrong predictably. You're ambiguous in parts of your prompts, you're not giving it clear instructions about doing one thing, or maybe it just constantly fails in a certain direction and you have to give it, you know, prompting instructions to pull it back from making this kind of error, or you give it examples, right, to guide it away from certain classes of errors. But it's not, like, going to be a surprise why or how the AI fails. Once you start prompting,

you'll start to see patterns that you

can prompt around to give instructions

around. And what I like to say is like

the biggest qualification for success

here is whether you or whoever is

working on the prompts of your company

is willing to spend two weeks

sleeplessly working on a single prompt to try to pass these evals. If you're

willing to do that, you're in a really

good place, right? It just takes such a grind, because the thing is, you're going to run these evals and at first

you're going to pass like 60% of the

time. And at this point most people just

give up. They're like, "AI just

can't do this task, right? They're like,

I just can't. I'm not going to do it."

And then you'll spend a night prompting

and you're going to be at 61%. You're

like, "Oh my god." The next group of

people will give up at this point. What

I'm here to tell you is that if you

spend like solid two weeks prompting and

adding more evals and prompting, adding

more evals and tweaking your prompt and

tweaking your prompt, tweaking your

prompt, you're going to get to something

that passes like 97% of the time. And

the 3% is kind of explainable. It's like a judgment call a human would make, almost. Humans make similar kinds of

judgment calls. Once you're there, you

can feel pretty good about how this might interact in production.
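The eval loop described here can be sketched in plain Python. The speaker's team used promptfoo; this stub harness, with its fake run_prompt function and tiny case lists, is only an illustration of the dev/holdout split and pass-rate idea:

```python
# Sketch of the eval loop: a dev set you iterate your prompt against,
# plus a holdout set you only check afterward. run_prompt is a stub
# standing in for running your real prompt through a model.

def run_prompt(case_input: str) -> str:
    return "6" if "relevant" in case_input else "0"

def pass_rate(cases: list[tuple[str, str]]) -> float:
    """Fraction of (input, expected_answer) cases the prompt passes."""
    passed = sum(1 for inp, expected in cases if run_prompt(inp) == expected)
    return passed / len(cases)

# Start with a dozen cases modeled on what customers actually ask,
# then grow toward 100+ per prompt before beta.
dev_set = [("how relevant is doc A", "6"), ("unrelated query", "0")]
holdout_set = [("how relevant is doc B", "6")]

dev_score = pass_rate(dev_set)          # tweak the prompt against this
holdout_score = pass_rate(holdout_set)  # only check this at the end

print(f"dev: {dev_score:.0%}, holdout: {holdout_score:.0%}")
```

Keeping the holdout score hidden while iterating is what guards against tuning the prompt to your own evals, the failure mode the talk warns about.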

What I recommend is like pre-production,

maybe in like beta, get to 100, you

know, tests per prompt and 100 tests for

the overall task. If you're passing like

99 out of 100, again, you should feel

pretty good about where you are, right?

So, that's just a rough guide. If you can do a thousand, that's 10 times better. Do that. But it's hard. It's

actually really hard to come up with

great evals. So, I'd recommend just at

least 100, go to beta, put it in customers' hands, and set the expectation: by the way, this is not yet perfect; that's why you're in a beta.

And then you listen and learn. Every

time a customer complains, either you

have their data because that's how your

app is set up, or you ask them like,

"Hey, can you share that document and

that question you asked to see why it

failed?" That's a new test. We've added many more evals at this point from real

things that happened to real customers

than the ones we came up with in the

lab. And your customers are going to do the dumbest things with your app. Okay? And they're going to do

such dumb things that you'd never predict.

But that's what customers really do. If

you've ever seen like a real person's

Google queries, they're barely legible,

you know? And I'm assuming the same thing is true of ChatGPT; they see a bunch of stuff like that. Your prompts probably look pretty smart; most people are like, "burrito me how ouch," or whatever. Like, what do you do with that?

Right? But you have to try to bring back

a great result and determine what

they're actually trying to say with

these ridiculous prompts. So do it; those become your real tests, and just

keep iterating. This is not a static

thing. New models will come out. Try the

new models. Promptfoo and other frameworks make this really easy. Add a

new model. It'll compute how well it

does against your prompt so far. Keep

tweaking your prompts. Sometimes the

addition or subtraction of a single word

might move you up a single percent, but

that's a very big deal if you're working

in a field like finance, medicine, law

where single percentage increases in

accuracy really matter to the

customers you're serving. Right? Keep

iterating. Never stop. There should be a

new GitHub pull request like every other

day or every day on your prompts. And

I'm telling you, if you just do these last two slides:

you know, how do the professionals

really do it? Break it down to steps.

Each step basically becomes a prompt or

piece of code. And then you test each

step. Test the whole workflow all

together. If you just do these two

things, you'll be like 90% of your way

there to building a better AI app than most of the crap that's out there,

right? Because most people never eval, and they never take the time to figure

out how professionals really do the job.

And so they make these kind of flashy

demos on Twitter. They maybe even raise

capital, and they may even be some of your heroes for a minute, but be careful who you choose as your heroes. The

real people are behind the scenes

quietly building, quietly making their

stuff better every single day. If you

just do these two slides, you're going

to be 90% of the way there and and

better than most of the things that are

out there. That's the craziest part.

Okay, now the hardest part, honestly: the part that, frankly, we are still trying to figure out post-exit, you know, at a multi-billion dollar company. It's still going to be really, really hard. I'm going to give some tips about marketing and selling AI apps in this new kind of world where you're maybe replacing or assisting a job, things that we've learned along the way. But the first thing I'll say:

this is a little bit counter to what I think is out there in a lot of the VC world. A lot of people say the most important thing is sales and marketing; a lot of people really, really think that. When you guys raise Series A's and Series B's, you'll have people on your board who say product doesn't really matter that much if you're really good at marketing and selling. And they've seen some examples of this working out really well. I think it's wrong. For 10 years we had an okay product at first. We went through

different marketing and sales leaders,

some of them super, you know,

well-qualified, etc., and they did okay.

When we had an awesome product, all of a

sudden people were referring us by word

of mouth.

news was coming to us because we were doing something genuinely new and

interesting, right? And that and word of

mouth and news is free marketing. People were coming to us. We had salespeople from our older product, which wasn't as good as the new one we came out with, you know, based on LLMs, and I will tell you, those salespeople became like order takers. So the most important thing you

could do for marketing and sales is to

build an amazing product and then make sure the world knows about it somehow. You obviously can't just, like, build it and not show anybody; a tree falling in the woods, nobody hears it. It's not

going to do anything. But I do think

that the quality of product matters so

much more than your Series A and B

investors will say. So when you guys

have those lame VCs on your board, you

can think back to this talk and push

back. All right. But it's still

important. It's still important to

market and sell. I have just three

pieces of advice here. The first thing

is that you might not be selling

traditional software anymore. Think

about how you're going to package and

sell it. The companies I'm most excited

about right now are taking real

services, like for example, reviewing

contracts for a company and they're just

doing it. They're like doing the full

service. Maybe there's a human in the

loop. And this would usually cost

somebody $1,000 per contract to review

if they went with a traditional law

firm. They're charging $500 per

contract. Again, for context, a lot of

the tools you guys use right now

probably cost 20 bucks a month. $20 per month versus $500 per contract. We're

talking about extreme step-ups in price.

Price it according to the value you're selling. Don't shortchange yourself.

It's maybe a little in conflict with

what I just said, but also listen to

your customers for how they want to pay.

Just ask them how would you rather pay

for this. I'll tell you what we found out. We were thinking about per-usage pricing, like this contract-reviewing company, and that may work in some cases where customers prefer to pay that way. But when we asked our customers, they said, "Listen, I'd rather pay more but make it consistent throughout the year than potentially pay less and pay per use." So our customers wanted to pay per seat: $6,000 per seat per year, 500 bucks a month. Fine. It's a situation where our customers wanted predictable budgeting. Give it to them, right? Listen to your customers.
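The two pricing models being compared can be sketched as quick arithmetic. The $500-per-contract and $6,000-per-seat figures come from the talk; the contract volumes and the break-even framing are illustrative assumptions, not anything the speaker stated.

```python
# Sketch: per-usage vs. per-seat pricing, using the talk's figures.
# Contract volumes below are hypothetical illustrations.

PER_CONTRACT = 500      # dollars per contract reviewed (per-usage model)
PER_SEAT_YEAR = 6_000   # dollars per seat per year ($500/month flat)

def annual_cost(contracts_per_year: int, model: str) -> int:
    """Customer's annual spend for one seat under each pricing model."""
    if model == "per_usage":
        return contracts_per_year * PER_CONTRACT
    if model == "per_seat":
        return PER_SEAT_YEAR
    raise ValueError(f"unknown model: {model}")

# A seat breaks even at 6000 / 500 = 12 contracts per year. Above that,
# flat per-seat pricing is cheaper for the customer -- and either way it
# is predictable, which is what these customers said they valued.
for n in (6, 12, 24):
    usage = annual_cost(n, "per_usage")
    seat = annual_cost(n, "per_seat")
    print(f"{n:>3} contracts/yr: per-usage ${usage:,} vs per-seat ${seat:,}")
```

The point of the sketch is that the seller trades some upside at high volume for revenue the customer will actually commit to in a budget.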

The third thing to really think about

when you're marketing and selling is all

this AI stuff is new and scary. Even these big companies want to dip their toes in the water. They want to try new things. Their CEO is sitting in front of a board at a Fortune 500 company, and the whole board is asking, "What are you doing about AI?" So the CEO goes into this company of 20,000 people asking, "What are we doing about AI?" And they're like, "I don't know, I'm trying Greg's product."

Okay. They want to try your product. But there's also this trust gap, because they used to do this work with people, and they can fire people, train people, coach people. People are not perfect, but they're used to them. They're not used to your product yet. They have no idea what to expect. So, how do you build trust? Some

really smart companies are doing head-to-head comparisons. Keep your law firm and use our thing side by side, then compare: how fast were we? How good were we? How different were the results? Keep your accountant, use our AI accountant, and then compare: how different, how often were we off in our accounting or tax accounting or whatever it is? That's a great way to build trust: compare it against people. Do studies, do pilots. There are so many ways you can do this, but think in your head: how do I build trust with my customer? And finally, the sale does not

end when they've written the check, and definitely not when they've started a pilot. What I'm seeing right now, as an angel investor in this post-exit world, is a lot of companies saying "our ARR is $10 million," and you dig under the surface and it's "oh yeah, we have a six-month pilot and they pay us a lot of money for that pilot." A lot of those pilots are not converting to real revenue, and there's going to be a mass extinction event as a lot of pilot revenue, call it PRR, pilot recurring revenue, or really not even recurring, just pilot revenue, fails to convert into real money. That's a real danger for startups right now, even ones reporting super high revenue numbers. A big part of your job as a

founder, and part of the job of the people you'll be hiring, is making sure that everybody uses the product and really understands it. Train them, roll it out consciously, and this is different for every industry, onboard them really thoughtfully. Maybe that's in the app, walking them through steps so they try different things. Maybe that's actually a person sitting next to them. I don't know if

you caught this, but a very small, kind of throwaway comment that Satya made earlier today is that one of the fastest-growing roles at startups is these forward-deployed engineers, which I think is a really fancy term for boots-on-the-ground people who sit next to your customer and make sure the product's actually working for them, right? Whatever it takes. One thing I

said a lot in my company, and I still feel this is very true, is that your product isn't just the pixels on the screen. It's not just what happens when you click this button. It's the human interactions with your support and customer success, with the founder; it's training; it's everything around it. If you don't get that right, then you might have the best pixels on the screen, but you'll be beaten by a company that invests more in their customers and in making sure their product is actually well used. That's all you need to do to build an awesome AI app and beat our $650 million figure handily. All right, let's open up for questions.

>> Hello. Thank you so much for your talk. I wanted to ask about the process of choosing what kind of industry to go into to try to create more automation. If there are already competitors in that space, would you suggest looking at another industry, or would you suggest trying to dive deeper into a niche of that industry? What would you advise in that situation?

>> So I don't think you should care about competitors at all. First of all, for some of these spaces the market is so big, we're talking about trillions of dollars currently being spent on marketing professionals or support professionals or whatever, that no single company is going to win the entire market, for the vast majority of them. And frankly, a lot of the time you're going to be scared of your competitors at first, and then after you start building, you're going to be dumbfounded by how bad they are. You're going to outbuild them and run circles around them.

It's not about the competitors. But what I will say, diving deeper into how to pick a market: the things I'd look at are, what kinds of roles are people currently outsourcing, say, to another country? If it's something they're willing to outsource, then that's probably a pretty good target for what AI could take over. If it's a role

where they feel like it's part of their identity to do it in house, that's different. For example, I don't think Pixar is going to outsource creating the story of a Pixar movie, right? Whether they're right or wrong, and maybe AI in two years will just do better Pixar than Pixar, the people at Pixar are going to feel very strongly about the storytelling element. So don't go after that part. Try to find the parts that are

already outsourced. Find big markets. Find where there's a pain point across many different companies. Find things you know about or can get access to information about. These are the kinds of things I'd be looking at while trying to

pick a market. But honestly, there are so many huge markets. You could literally print out all the knowledge work roles, or keep it digital if you want, throw a dart at it, and wherever the dart lands, choose that market and start running at it. You're probably going to hit a trillion-dollar market. So, competitors or not, don't care.

>> Thank you.

>> Perfect, thanks a lot. So, Michael from Switzerland. I have a quick question, because you're a successful founder and many of us here are going to found companies. I wanted to know how your focus changed across the different stages of the company: from, say, pre-seed, what did you focus on, versus the seed stage, the Series A stage, and finally the exit? And which part did you enjoy the most?

>> It's a great question, Michael, so I'll answer both what I should have done and what I did do.

>> Perfect, thank you.

>> What I should have done is, at the seed stage, focus on making a great product that gets product-market fit; then at the Series A stage, focus on making a great product that gets product-market fit; then at Series B, focus on making a great product that gets product-market fit; then Series C, great product, product-market fit. You can probably see the pattern here.

What I ended up doing is focusing on all kinds of other things that didn't matter nearly as much. Because what is a company outside of its product? The service you're providing to your customers is literally delivered through the product, and if you focus almost entirely on that and become obsessive about it, then in my opinion a lot of other things will follow. For example: what people do we need to build a product that gets product-market fit? Now you have HR and recruiting to answer that. How are people going to find out about this amazing product? That's marketing and sales. What culture do we need at the business to create a product that people love and really use? Now you have other parts of HR and setting the culture, which is a very important part of your job as CEO.

So you end up as CEO focusing on all these different aspects by necessity, but all to that one end. What ends up happening for a lot of founders, because they read Medium posts and blog posts and they talk to their Series A and Series B investors, is that they end up focusing on HR or finance or fundraising or whatever, not as means to the end of creating a great product that gets product-market fit, but as ends in themselves: oh, we need to have a great culture in the abstract, or now we need to hire marketing and sales. I did this. I fell into this trap. Big mistake. As you can tell, I'm one founder who is very biased toward the product side, but I feel very strongly about this.

>> Hi Jake. So when I was 14 I sold my startup to Deloitte, and like you I'm kind of looking for the next thing to do, in the post-exit, post-acquisition stage. If you were here at Y Combinator Startup School, what would you be doing tonight? You know, Casetext aside, what would you be doing here, exactly, tonight, now that you've exited?

>> It's kind of amazing. I exited at 40 and you exited at 14.

>> Yeah.

>> So you're already well ahead. It's awesome. Actually, I

feel like in some ways, for us in the early days, focusing on legal made sense because I knew legal, but it was also kind of a mistake, because at the time the legal software industry, pre-LLMs, was actually pretty small. Lawyers making a trillion dollars a year sounds pretty good, but how much of that are they really spending on software? The answer is a very small amount. So no matter how well we did as a company, we just weren't going to make something that changed that many lives, or that made that much money, ultimately, from a business perspective. Pre-LLM, we were only making incremental changes to the workflows and outputs of the people we were serving. Post-LLM, we helped many more people, made them a lot more effective, and changed many more

lives. And I will tell you, having existed in both spheres, making small impacts on a small number of people, making only small differences in their lives, contrasted with making a huge impact on many more lawyers, in our case making them way more effective and efficient and replacing some of their work with LLMs, the latter felt a lot better, and I'm kind of addicted to that now. Long story short, I'd be focusing on the biggest problem you could possibly think of that is plausibly solvable with the technology and skill

set that you have. What do people want? What do businesses want? People want to be skinnier and not lose their hair. They don't want to do their laundry. Everybody wants a cleaner to show up to their house for eight hours a day and make the whole house spotless, but most people just can't afford that. Could you make a robot that does that for you? Is that the kind of product that can serve everybody in the world? In fact, is that the kind of product that, like the dishwasher in the '50s, could unlock a lot of human potential, because people staying at home to take care of the kids don't have to clean up the house anymore? Because they can buy a $1,000-a-year robot or

whatever. There is so much you can unlock just by thinking: what is the biggest problem that most people face? In businesses, they want to market their products, sell their products, make sure people are doing great work, and replace certain parts of their work with something more consistent and more available. That's where I'd be focusing my attention: find a huge problem that a lot of people have, that you feel like you can solve, and just go after it. Run as hard as you can.

>> Great. Thank you.

>> I think I have time for one more.

>> Hey, I'm Sabo. I was wondering: if you're making AI to be an assistant or replacement for a human, you could price that service based on how much time it saves a human, or on what you would pay the human as a salary. But if you're making something where the AI is doing what humans could not possibly do, like looking through hundreds of thousands of legal documents, how do you price such a service?

>> I want to be really nuanced with what I said earlier. I think at first you can start

charging what the human is charging, and then competitors will come in and charge a little bit less, and then other competitors will come in and charge a little bit less. It's kind of beautiful how capitalism works: it makes the service cheaper and cheaper, and at a certain point, unless you're in a very protected kind of space, you will end up charging a lot less than the people were. Which I think is probably a good thing at the end of the day for society, right? Bad for your business, good for society, because now you can have the services of a lawyer for 10 cents on the dollar, one cent on the dollar. For that new category, I would start from: what's the value you're providing to the business? Start there. If they're going to save $100 million doing this, or would have paid $5 million to do this.

Okay, take 10% of that, 20% of that. Have a conversation with your customer. "How much are you willing to pay to solve this problem?" is probably the best place to start. I actually have time for one more question, rapid fire. It's a super fast one.
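The value-based pricing heuristic in the previous answer (start from the dollar value delivered, then charge a 10-20% slice of it) can be sketched as arithmetic. The $100M-saved and $5M-would-have-paid figures come from the talk; the capture rates are the speaker's example percentages, and the helper name is my own.

```python
# Hedged sketch of value-based pricing: price as a fraction of the
# dollar value the product creates or saves for the customer.

def value_based_price(value_delivered: float, capture_rate: float = 0.15) -> float:
    """Charge a slice (e.g. 10-20%) of the value delivered."""
    if not 0 < capture_rate < 1:
        raise ValueError("capture_rate should be a fraction like 0.10-0.20")
    return value_delivered * capture_rate

# The talk's examples: a customer who would save $100M doing this,
# or who would have paid $5M to get it done some other way.
print(value_based_price(100_000_000, 0.10))  # 10% of $100M -> 10000000.0
print(value_based_price(5_000_000, 0.20))    # 20% of $5M   -> 1000000.0
```

As the answer notes, this is only a starting point; competition tends to push the achievable capture rate down over time.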

>> Hi, Jake. Congrats on your exit. I know you probably get this question a lot, but when you're building things with prompts on top of models that may not be proprietary, how do you build defensibility and not end up as a GPT wrapper, basically?

>> My fastest answer: just build it. As soon as you build it, you'll see how hard it was to build. How many little pieces you have to build, how many data integrations, how many checks, how fine-tuned the prompts need to be, how carefully you have to pick your models. And when you do that, you're going to find that you've built something nobody else can build, because you spent two years doing nothing but that. So I'm not scared. Don't be scared. All right. Thank you, everybody.
