How to Build Viral Products: Lessons from Kevin Weil, VP, OpenAI for Science
By DECODE
Summary
Topics Covered
- Start Small, Win Deeply
- Define the Problem Before the Product
- Everyone Can Now Build
- LLMs Are Already Pushing the Science Frontier
- Build Where AI Barely Works
Full Transcript
Welcome to campus, Kevin. It's so good to have you here.
Thank you for having me.
Of course. Really excited to chat not just about Bali, but also more broadly about product building and how to build products more effectively with AI. As I was sharing with you before we walked on stage, we have a room full of majority founders. I think everybody knows who you are, but could you give us a one-minute intro of how you ended up here? Because you've been a part of so many incredible products throughout your career.
Yeah, okay, I'll do the fast one. I was doing my PhD in physics at Stanford. I thought I was going to be a physics professor for the rest of my life. Then I met my wife, who was a Mayfield Fellow; I think there's a Mayfield program here at Berkeley. Basically, she'd gone through college working at startups. I had no idea that even existed, but she opened my eyes to it. And I was like, "Oh man, that sounds way more interesting." I can write some code and ship something to a million people tomorrow, instead of maybe making a contribution to physics over 40 years.

So I started working at startups, a couple you've probably never heard of because they failed. Then I went to Twitter when it was about 40 people. I had no idea what I was doing, but I joined as an engineer and ended up as head of product after about seven years. I was head of product at Instagram, and I was one of the co-creators of Libra, if people remember Libra, the crypto project. That also didn't work, but there are good stories. Then I was president at Planet, Planet Labs, building satellites and imaging the whole world every day. I joined OpenAI as chief product officer about 18 months ago and just moved to a new role focused on using AI to accelerate science all around the world, because if we can do the next 25 years of scientific discovery in five using AI, maybe we'll have an even bigger impact than building ChatGPT. So that's the nutshell.

And since most of you are entrepreneurs: my wife runs a venture fund called Scribble Ventures, a pre-seed fund, and I'm very lucky to get to work nights and weekends with her. I've always enjoyed doing a little bit of investing and a lot of operating, because I learn so much from the entrepreneurs I get to work with, and hopefully I can help them not fall in some of the potholes that I've fallen in over the course of my career.
I love that. Literally, right before we came on stage, I was sharing with Kevin that two of the attendees last year met each other here, became co-founders, and just raised funding.
Super cool.
So small world. Speaking of avoiding potholes, what are some of the top mistakes founders can avoid in trying to build an incredible product?
Oh, that's a good question. I think one of the biggest ones, especially in the early days, is trying to build a product for everybody. Paul Graham says this well, but I think it's really true. You'd rather have a product that a thousand people absolutely love and can't live without than a product that 10 million people are kind of like, "Yeah, I guess that's better." You have to remember that most people don't spend their days thinking about your product; they already have habits, and habits are hard to break. So you need a product where they're just like, "Oh my god, if this thing went away, I would be so sad." Even if that's a small number of people, because you'll learn a ton from that small group, and then you can expand from there. Whereas if lots of people like you just a little bit, you get a lot of people trying, none of them really stick around, and you don't have a product that works. It's better to start small and get your foot in the door with something that people really love. Facebook didn't start out going after the whole world, right? It started out going after colleges, and I think it wouldn't have succeeded if it had tried to be everything to everybody early on.

And speaking of being really focused with that early group, are there principles you go by in looking at any product and figuring out which user segment to prioritize, and how to actually build one that people really love?
There's obviously not a single rule, because different products go after different segments. It's really: what problem are you trying to solve? That's another thing I learned over and over again, from Kevin Systrom in particular at Instagram. If you can really tightly articulate the problem you're trying to solve, then a lot of these other things, exactly how the product looks, who you're building for, what your ICP or customer segment is, a lot of that actually becomes obvious. If your problem is really broad, like "we're going to make it easier for people to share content," well, that doesn't help you at all. I don't go around saying, "Gosh, I wish somebody would help me share content," right? So what user problem are you trying to solve? Make it as precise as possible. Make sure you understand why it's a problem and for whom it's a problem, and then a lot of the stuff falls out the other side. But it takes work; it takes user research and other things to really understand the problem you're solving.
I love how you framed it, because virality actually starts really small: you get one thing really, really right. And speaking of how to iterate on that cycle faster, we were just talking about how all the founders in the room have access to different kinds of AI tools to help them do that faster. Since the space is moving so fast, what are some of the best practices you see firsthand that founders can adopt?
Well, I think it's really cool, right? Think back 12 months, to November of '24. Most people who were writing code were writing it themselves; we were doing things the way we'd always done them. Go forward 12 months to today and it's completely different. I would imagine that most of you, most of your companies, are using Claude Code, Codex, Cursor, one of those, for like 90% of what you do. And if you're not, you probably should be. It's amazing how much changes in 12 months with AI.

And the cool thing about that is it's not just your engineers who can build stuff. There's very little reason for anybody to make a PowerPoint deck anymore, or Google Slides or whatever. Product docs are very different now. Nobody should be writing a PRD; you should be building stuff. Whether you're a product manager or a designer. I mean, our finance team at OpenAI builds tools. They vibe code them with Codex and solve their own problems, where in a previous world you'd be like, "Oh, can I just get an engineer?" And of course nobody's prioritizing the work that finance does, because that's just how it works; it's usually not your do-or-die thing. So it's hard for them to find engineers, and they end up with these super manual workflows that never change because they don't have the resources to build this stuff. But guess what? Now they can. And that's amazing. Everybody at your company can now be building stuff. Maybe they're not all building production code, but they're building tools and solving their own problems. What a cool world that is.

And it was only 12 months ago that the world was completely different. We're just in a really cool part of the cycle, because I guarantee you that in 12 months the world is going to be completely different in some other way. I have some guesses, but I could be completely wrong. What that means is that if you're an entrepreneur, every single product that we use, whether hardware or software, any of the big billion-user software products, the phones we all have, everything will be reinvented over the next five years. Because everything that has huge scale today was built pre-AI, and post-AI everything can be totally different. Are some of the big incumbents going to reinvent themselves? Maybe they could, but history would suggest that most of them won't, and that it's actually a time for disruption. That's a massive opportunity for every single one of you in the audience, and I think that's really exciting.

I know you mentioned that you have some guesses for what the future might look like. I'd love to hear your thoughts on what areas you're most excited about.
Well, I switched from focusing on product, thinking about ChatGPT and our other products all day, which was the coolest product role I've ever had, to working on science. So that gives you some sense. I think there's a real possibility that 2026 is the year that the way we do science completely changes, and that when we're sitting here in November of '26, we're like, "Man, do you remember when you had to do all of this grunt work in science yourself, and AI couldn't even be a brainstorm partner?"

We're already seeing ChatGPT used not just to do the kinds of things you're used to seeing ChatGPT do, but to actually push the frontier in science. You have math professors tweeting, and we see this like once a day at this point: "Hey, I had this idea. I was pretty sure this theorem was true, and I was going to give it to my postdoc, but he was busy and didn't respond for a week. So I gave it to ChatGPT, and it solved it in 10 minutes." Now, we're not proving the Riemann hypothesis yet, and I'm not trying to get too far out ahead. It's not even necessarily a full theorem; it's maybe a lemma. But still, this is ChatGPT. This is an LLM, an AI that we all built, moving beyond the frontier of what humans know. It's not yet better than what humans could do; in most of these cases, the math professors are like, "Yeah, if I'd put another few hours into it, I probably could have proved that." But ChatGPT did it in 10 minutes. And that's with today's models.

And you have to remember, if there's one thing you remember walking out of here at the end of today: the model that you're using today is the worst model that you will ever use for the rest of your life. They're only getting smarter, and they're getting smarter very quickly. So if a model can prove small novel things today, can help with biological research, can help with materials science, just imagine where we're going to be in 3 months, in 6 months, in 12 months, let alone three or four years.
In the interest of time, I want to ask you one last question, then open it up to the audience. If you had to leave one very tactical piece of advice for founders in the audience, something they can bring back and build as a new habit, what would you recommend?
I think you have to be using AI in everything that you do. And if you're not, you're going to get beaten by somebody who is. I think that goes for the product you're building, too. Everything is going to be reinvented; everything that we use is going to be done differently. Docs should not be the same docs in a few years. We should be thinking completely differently, and the product will probably look and feel completely different. The contrapositive, or whatever, of that says that if you're building something that kind of looks like the way we do things today, then you're probably not building it right. There's probably a different way.

And be comfortable building in a place where AI is only barely working. Sometimes it's a little nerve-wracking when you're building something and the AI is just only kind of good enough, maybe not really good enough. But if there's one thing I've learned over the last year and a half at OpenAI, it's that you go through this phase where AI just can't do something. Name your thing: there was some point where AI couldn't do it. Then you get these glimmers where it's just starting to be able to do it, and 10, 20, 30% of the time it gets it right, which is kind of annoying. But when you hit 10 or 20 or 30%, very quickly you go to 90%. And then very quickly you go to, "Oh, of course I use AI for this, and I will never not use AI for this for the rest of my life." From that existence proof of "it kind of works," you go very quickly to "it really works." So the area where it kind of works is a great place for entrepreneurs to be building, because if it kind of works now, it's really going to work in 6 or 12 months. And if you're the first one to realize that, then you're riding the frontier, which is where you want to be as an entrepreneur.
I love that you shared that, because literally in the earlier session, Barry and I were just talking about the importance of fractional founders: people in a full-time job who come across a pain point and build a part-time project to solve it. To some extent, what you're describing is that everybody needs to be fractional in AI, because it's going to completely change the way all of us do our work. So everybody needs to pick up on this idea, regardless of the space they're building in.

I want to make sure to open it up to audience questions as well. So Milo, I'd love your help passing the mic over.
Do you see any significant risks associated with the development of AI?
Risks with the development of AI? I mean, yeah, sure. There are risks with any new technology, and probably even more with AI because it's so powerful. The thing I think is different about where we are today: I saw the inside of Twitter from the very earliest days. If you go back to 2009, for folks who remember, Twitter was being used during the Arab Spring, and people were using it to communicate when the government had shut the internet down, and all these other things. We talked about Twitter as the free speech wing of the free speech party, speaking truth to power, and we celebrated that in a major way. We didn't realize at the time that there was another side to that coin: all the things that gave users that power could also give other people, governments, and so on the power to do bad things with it. We just didn't realize; it was sort of the first time that we had mass digital media like that. You could look at any of the various missteps that Facebook has had over the years.

I think we're much more, what's the right word? Wary is maybe not the right word, but at least we know to expect it, in a way that back in 2009, 2010, we were all just collectively well-intentioned but naive. So when we launch a new model, we have an entire safety team, a large number of people who spend all their time doing AI safety research and then working on models before we launch them: making sure that we round all the edges, that you can't jailbreak the models, that they don't answer questions they shouldn't answer, and so on. We work with third parties to red team our models up front. We work with government groups who have responsibility, who take the models and try to break them before we launch them to the public. It's just a much more robust program than anything I've ever seen before. Which isn't to say that we won't make mistakes. I'm sure we will make mistakes, but it's a system designed to make sure that we make more small mistakes rather than a big mistake.
Maybe Kevin, you can pick one last question.
Oh gosh, I don't know. Uh, how about right in the middle in the back?
Thank you so much for sharing, Kevin. I have a quick question. You come from a physics background but served as chief product officer at OpenAI. So my question is: what are the key qualities you look for in a candidate who could grow into a great product lead? Thank you.
Nobody goes to school to be a product manager, right? In the same way that nobody goes to school to be an entrepreneur. So I think there's a mix. It helps to be technical. It helps to have good EQ, to be able to put yourself in the shoes of your users or your team: empathy and so on. In a product role, most of the time you're working with a team, and you have some leadership role to play. I won't say you lead the team, because PMs contribute one way, designers another, and engineers another, but you have some leadership role to play and nobody reports to you. So good PMs can get people to follow them. And it shouldn't be about the PM having the ideas; it should be about good ideas coming from everywhere. But the PM has a strong responsibility for making sure that the team ultimately goes in a single direction. You're not just saying yes to everything; you're sourcing ideas from the team and then collectively saying, "All right, this is the bet we're going to make," and then making sure everybody knows it, is rallied around it, and understands why you're doing it and what direction you're going in. I don't know how to describe that skill, but it's a very important one.

At OpenAI in particular, one of the things we really look for is agency. If you wait around and ask someone, "Hey, am I allowed to do this?", if you're waiting around for someone, everyone else is building and you're not. At OpenAI, the trick is the whole "you can just do things" ethos. You have an idea? Prove it out: use Codex, build it yourself tonight, prototype it, and show it to the team tomorrow. I think that's one of the coolest things about being a PM in this age.
So, thanks for offering more time.
Should we take one more?
Sure. We can try one or two and see how fast I can go.
All right, I'll let you.
Yeah, maybe we'll pick one from this side.
Sure. You, and then we'll go. You can finish this up.
Hi there.
Hi. So, as a fractional founder, I use AI all the time. It makes me productive; it allows me to compete with people who are able to devote all their time to working. To me, AI alignment and safety is something I value and want to work towards. How can I, as a founder with an AI product, work towards that goal or contribute to it?
A lot of it comes down to the labs, but any product you build, I don't know if you're B2B or consumer or whatever, still has a sort of trust and safety component. People are using your product in some way. Can they use it to do things that you don't want them to do, that they shouldn't do, that weren't what you intended? Depending on your product, that's either a big deal or not a big deal. But it's a big component of what we do, too. There's the fancy AI research, and then there's also the very down-and-in work of dealing with fraud and with people trying to get the model to do things it shouldn't. Your product will have those challenges too. I think some of the really big safety research is going to happen at the frontier labs, but every product has its own integrity, trust, and safety surface, and you've got to get that right, especially as you scale.
Hello. Sorry, so I was really curious. I'm a computer science and engineering major who's also interested in entrepreneurship, especially with you now working on OpenAI and science. When you're training an AI model, there are often issues with reinforcement learning where the model just memorizes an answer instead of properly reasoning about it, using the neural network the way a human brain would. So I was curious, with OpenAI and science, and for any entrepreneurs out there who see that AI has potential for something like electrical engineering or other science research: what have you been doing at OpenAI, or what would you suggest, around reinforcement learning to ensure the model is properly reasoning instead of just memorizing or cheating?
I think a lot of people are still using GPT-4 and some of the non-thinking models. I basically turn every single thing I ask GPT into thinking mode, because I find I get better answers, even if it means I wait five or ten seconds for it to do a little bit of thinking. With GPT-5.1, it does a much better job of thinking as much as it needs to. If you ask it an easy question, it'll give you a pretty quick answer, and if you ask it a really hard question, it'll think for a long time to give you a better answer. We worked very hard to make sure that it doesn't just try to memorize things, because you do get far better answers when it thinks. So I don't think that's as much of a problem anymore.

One of the interesting things, though, is when you're working on the hardest problems, when you're a mathematician trying to do a proof, or you're in biology working at the frontier. The model is like a human: if you're really at the frontier, if you're asking me the hardest math problems that I can do, then by definition I'm not getting them 100% right. Maybe I'm getting them 10% of the time. One of the interesting challenges is that if a model is only right 10% of the time, and you're dedicated, you're trying to get it to solve this problem, and you try four times and it doesn't work, that actually doesn't mean it can't solve the problem. It potentially can; you just need to try more. You need to give it more compute, more thinking time. But that's not easy to see from the outside. It's very hard to tell the difference, as a ChatGPT user, between a problem the model just can't solve yet and a problem it can solve 5% of the time. That's a product problem we're looking to solve, so that you can enter harder and harder problems, and if the model can do them, it will realize it needs to think more and more, maybe try multiple independent parallel agents working at once, and ultimately get you an answer. Basically, the conclusion is that the model is much better than people even realize, but on low pass-rate problems it's hard to tell the difference. We're working on exposing that, which will help on some of these really hard problems.

Hey, thank you so much.
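The retry math behind that answer is easy to make concrete. As a minimal sketch, assuming independent attempts and an illustrative fixed 10% per-attempt success rate, the chance that at least one of n attempts succeeds is 1 - 0.9^n: four tries still fail about two-thirds of the time, while fifty tries almost always succeed.

```python
# Illustrative only: success probability of repeated independent attempts
# at a problem a model solves 10% of the time per try.
def p_at_least_one_success(per_attempt_rate: float, attempts: int) -> float:
    """P(at least one success) = 1 - P(every attempt fails)."""
    return 1.0 - (1.0 - per_attempt_rate) ** attempts

for n in (1, 4, 10, 20, 50):
    print(f"{n:2d} attempts -> {p_at_least_one_success(0.10, n):.1%}")
```

This is why a handful of failed attempts says very little about whether a model can solve a low pass-rate problem: the curve climbs steeply as you spend more attempts, which is to say more compute.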
Yeah, thank you so much for coming in Kevin and uh look forward to the round table discussion next door.
Yeah, thank you so much for having me.