No Priors Ep. 113 | With OpenAI's Eric Mitchell and Brandon McKinzie
By No Priors: AI, Machine Learning, Tech, & Startups
Summary
## Key takeaways

- **O3's Reasoning Focus**: O3 is the latest in OpenAI's line of models focused on thinking carefully before responding, making them smarter than models that don't think first, similar to how humans are more accurate when they pause to think. It's not only better at math problems and factual questions but also uses tools like web browsing and code execution to enhance its usefulness for multi-step tasks. [00:29], [01:06]
- **Reinforcement Learning Key**: The biggest difference in building O3 is reinforcement learning, shifting from predicting next tokens in pre-training to solving difficult tasks by taking as long as needed. This enables focused goal achievement, unlike standard foundation models. [03:05], [05:29]
- **Tool Use Boosts Scaling**: Tool use is crucial for continuing test-time scaling: without tools the model rants irrelevantly, but with tools like image manipulation or code execution it productively uses tokens and achieves steeper accuracy gains over time. For visual tasks, models recognize their limitations and use tools to crop or adjust, improving outcomes noticeably. [09:02], [10:27]
- **Model Unification Trend**: Unification of models is a priority to avoid confusing users with multiple choices in ChatGPT; instead, embed the decision inside the model for intuitive experiences where it thinks only as long as needed based on its uncertainty. This aims for precise timing, giving answers immediately if known or taking a day if required, without bifurcation into separate fast and slow models. [05:24], [07:39]
- **Deep Research Application**: Deep research leverages O3's tool use for tasks like web browsing to synthesize up-to-date information, chart data, and run analyses, acting as an AI research analyst that infers steps for high-level requests like company due diligence. Browsing serves as a natural test bed for RL, enabling longer-horizon behaviors that feel reliable and useful. [11:05], [13:17]
- **Simulating AI-Human Interactions**: To handle real-world collaboration like software engineering, train models to simulate interactions by having multiple instances cooperate via multi-agent RL, starting with two models interacting as a baseline for human-like teamwork. This addresses high uncertainty from unpredictable humans, who act as expensive tool calls, by first mastering model-to-model dynamics before scaling to real humans. [25:22], [27:13]
Topics Covered
- Will AI models bifurcate into fast and slow thinkers?
- Why does tool use supercharge test-time scaling?
- Can general models absorb specialized robotics training?
- How will RL training spike AI capabilities unevenly?
Full Transcript
[Music] Hi listeners and welcome back to No Priors.
Today I'm speaking with Brandon McKinzie and Eric Mitchell, two of the minds behind OpenAI's o3 model.
o3 is the latest in the line of reasoning models from OpenAI: super powerful, with the ability to figure out what tools to use and then use them across multi-step tasks.
We'll talk about how it was made, what's next, and how to reason about reasoning.
Brandon and Eric, welcome to No Priors.
Thanks for having us.
Yeah, thanks for having us. Do you mind walking us through um 03 what's different about it? What what it was in terms of a breakthrough in terms of like you know a focus on reasoning and you're adding memory and other things versus this a core foundation model or LLM and what that is. So 03 is like our most recent model in this Oer line of models that um are focused on thinking carefully before they respond.
And these models are in sort of some vaguely general sense smarter than like models that don't think before they respond.
You know, similarly to humans, um it's easier to be, you know, more accurate if you think before you respond.
I think the thing that is really exciting about o3 is that not only is it just smarter if you make an apples-to-apples comparison to our previous o-series models, it's just better at giving you correct answers
to math problems or factual questions about the world or whatever.
Um, this is true and it's great and we, you know, will continue to train models that are smarter. Um, but it's also very cool because it uses a lot of tools that um, uh, that that enhance its ability to do things that are useful for you.
So, yeah, you can train a model that's really smart, but if it can't browse the web and get up-to-date information, there's just a limitation on how much useful stuff that model can do for you. If the model can't actually write and execute code, there's a limitation on the sorts of things an LLM can do efficiently, whereas a relatively simple Python program can solve a particular problem very easily. So not only is the model on its own smarter than our previous o-series models, which is great, but it's also able to use all these tools that further enhance its abilities, whether that's doing research on something where you want up-to-date information, or you want the model to do some data analysis for you, or you want the model to do the data analysis and then review the results and adjust course as it sees fit, instead of you having to be so prescriptive about each step along the way. The model is able to take these high-level requests, like do some due diligence on this company, maybe run some reasonable forecasting models on so-and-so thing, and then write a summary for me, and it will infer a reasonable set of actions to do on its own. So it gives you a higher-level interface to doing some of these more complicated tasks.
That makes sense.
So it sounds like basically there's like a few different changes between your core sort of GPT models where now you have something that takes a pause to think about something.
So at inference time you know there's more compute happening and then also it can do sequential steps because it can kind of infer what are those steps and then go act on them.
How did you build or train this differently from just a core foundation model, you know, when you all did GPT-3.5 and 4 and all the various models that have come over time? What is different in terms of how you actually construct one of these? I guess the short answer is reinforcement learning; that's the biggest one. So rather than just having to predict the next token in some large pre-training corpus from, you know, everywhere essentially, now we have a more focused goal of the model solving very difficult tasks and taking as long as it needs to figure out the answers to those problems. Something that's kind of magical from a user experience for me: we've talked a lot in the past about test-time scaling for our reasoning models, and I think for a lot of problems, without tools, test-time scaling might occasionally work, but at some point the model's just kind of ranting in its internal chain of thought, and especially for some visual perception ones, it knows that it's not able to see the thing that it needs, and it just kind of loses its mind and goes insane. And I think tool use is a really important component now to continuing this test-time scaling.
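To make that shift concrete, here is a toy contrast (a minimal sketch, not OpenAI's actual training code) between the pre-training objective, which scores every next-token prediction, and an outcome-based RL reward, which only scores whether the final answer to a verifiable task is correct, however many thinking tokens were spent:

```python
# Toy illustration of the shift described above: pre-training scores every
# next token, while RL on reasoning tasks scores only whether the final
# answer solved the problem, however long the model took to get there.
import math

def next_token_loss(token_probs):
    """Pre-training objective: average negative log-likelihood assigned to
    each ground-truth next token (here, a list of assigned probabilities)."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def outcome_reward(model_answer: str, reference_answer: str) -> float:
    """RL objective for a verifiable task: reward depends only on whether
    the final answer is correct, not on how many thinking tokens were spent."""
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0

# Pre-training cares about every token prediction...
print(next_token_loss([0.9, 0.4, 0.7]))    # lower is better
# ...while the RL setup only checks the end result of a long reasoning episode.
print(outcome_reward("x = 42", "x = 42"))  # 1.0, regardless of tokens used
```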
And you can feel this when you're talking to o3. At least my impression when I first started using it was, the longer it thinks, I really get the impression that I'm going to get a better result, and you can kind of watch it do really intuitive things. It's a very different experience, being able to trust that as you're waiting, it's worth the wait and you're going to get a better result because of it, and the model's not just off doing some totally irrelevant thing.
It's cool. I think in your original post about this too, you all had a graph which basically showed how long it thought versus the accuracy result, and it was a really nice relationship.
So clearly, you know, thinking more deeply about something really matters.
And it seems like, in the long run, do you think there's just going to be a world where we have a split or bifurcation between models which are fast, cheap, efficient, get certain basic tasks done, and then another model where you upload a legal M&A folder and it takes a day to think, and it's slow and expensive, but then it produces output that would take a team of people a month to produce? Or how do you think about the world in terms of how all this is evolving or where it's heading?
You know, I think for us, unification of our models is something that Sam has talked about publicly: we have this big crazy model switcher in ChatGPT and there are a lot of choices, and we have a model that might be good at any particular thing a user might want to do, but that's not that helpful if it's not easy for the user to figure out, well, which model should I use for that task? And so yeah, making this experience more intuitive is definitely something that is valuable and something we're interested in doing. And that applies to this question of, are we going to have two models that people pick between, or a zillion models that people pick between, or do we put that decision inside the model?
I think everyone is going to try stuff and figure out what works well for the problems they're interested in and the users that they have. But yeah, that question of how you make that sort of decision as effective, accurate, and intuitive as possible is definitely top of mind. Is there a reason from a research perspective to combine reasoning with pre-training, or to try to have more control of this?
Because if you just think about it from the product perspective of the end consumer dealing with ChatGPT, you know, we won't get into the naming nonsense here, but they don't care. They just want the right answer and the amount of intelligence required to get there, in as little time as possible, right? The ideal situation is that it's intuitive: how long should you have to wait? You should have to wait as long as it takes for the model to give you a correct answer. And I hope we can get to a place where our models have a more precise understanding of their own level of uncertainty.
Because if they already know the answer, they should just tell you.
And if it takes them a day to actually figure it out, then they should take a day. But you should always have a sense that it takes exactly as long as it needs to for that current model's intelligence.
And I feel like we're on the right path for that.
Yeah. I wonder if there isn't a bifurcation, though, between an end-user product and a developer product, right?
Because there are lots of companies that use the APIs to all of these different models for very specific tasks, and on some of them they might even use open-source models with really cheap inference, with stuff that they control more.
I hope you could just kind of tell the model, hey, this is an API use case, and you really can't be over there thinking for 10 minutes; we've got to get an answer to the user.
It'd be great if the models get to be more steerable like that as well. Yeah, I think it's just a general steerability question.
Like at the end of the day, if the model's smart, like you should be able to specify like the context of your problem and the model should do the right thing.
There are going to be some limitations, because just figuring out, given your situation, what the right thing to do is might require thinking in and of itself.
So it's not that you can obviously do this perfectly, but pushing all the right parts of this into the model to make things easier for the user seems like a very good goal. Can I go back to something else you said? So the first guest we ever had on the podcast was actually Noam Brown.
Um so I've heard of him you know two plus years ago.
Yes. Hi. It'd be great to get some intuition from you guys for why tool use helps test-time scaling work so much better. I can give maybe very concrete cases for the visual reasoning side of things. There are a lot of cases, and this goes back to the model being able to estimate its own uncertainty,
where you'll give it some kind of question about an image and the model will very transparently tell you, like, I don't know,
I can't really see the thing you're talking about very well.
It almost knows that its vision is not very good. But what's kind of magical is, when you give it access to a tool, it's like, okay, well, I've got to figure something out. Let's see if I can manipulate the image or crop around here or something like this.
And what that means is that it's a much more productive use of tokens as it's doing that. And so your test-time scaling slope goes from something like this to something much steeper. And we've seen exactly that: the test-time scaling slopes without tool use and with tool use, for visual reasoning specifically, are very noticeably different.
Yeah, I'd also say, for something like writing code, there are a lot of things that an LLM could try to figure out on its own but would require a lot of attempts and self-verification, that you could write a very simple program to do in a verifiable and much faster way.
So, you know, "hey, do some research on this company and use this type of valuation model to tell me what the valuation should be": you could have the model try to crank through that and fit those coefficients or whatever in its context, or you could literally just have it write the code to do it the right way and know what the actual answer is. And so, yeah, I think part of this is that you can allocate compute a lot more efficiently, because you can defer stuff that the model doesn't have a comparative advantage in doing to a tool that is really well suited to doing that thing.
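As a rough illustration of deferring work to a tool rather than cranking through it in context: a hypothetical sketch where the model emits a tool call and a few lines of deterministic code compute the verifiable answer. The tool name and dispatch interface here are invented for illustration, not OpenAI's agent framework.

```python
# A deterministic "tool" the model can call instead of estimating token-by-token.
from typing import Callable

def discounted_cash_flow(cash_flows: list, rate: float) -> float:
    """A few lines of code that compute exactly what a model would struggle
    to work out reliably in its own context."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

TOOLS: dict[str, Callable] = {"dcf": discounted_cash_flow}

def handle_tool_call(name: str, **kwargs) -> float:
    """Stand-in for the executor that runs whatever tool the model requested."""
    return TOOLS[name](**kwargs)

# If the model decides "I should just compute this instead of estimating it",
# it emits something like: {"tool": "dcf", "cash_flows": [...], "rate": 0.1}
print(handle_tool_call("dcf", cash_flows=[100.0, 110.0, 121.0], rate=0.10))
```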
One of the ways I've been using some form of o3 a lot is deep research, right?
I think that's basically a research analyst AI that you all have built that will go out, look up things on the web, synthesize information, and chart things for you. It's pretty amazing in terms of its capability set.
Did you have to do anything special, any form of reinforcement learning specifically for it to be better at that, or other things that you built against it? How did you think about the data that was used for training it?
I'm just sort of curious how that product, if at all, is a branch off of this, and how you thought about building that specifically as part of this broader effort. I think when we think about tool use, browsing is one of the most natural places that you think of as a starting point. And it's not always easy.
I mean, the initial kind of browsing that we included in GPT-4 a few years back, it was hard to make it work in a way that felt reliable and useful.
But in the modern era, these days, last year, you know, two years ago is ancient history, I think it feels like a natural place to start because it's so widely applicable to so many types of queries; anything that requires up-to-date information, it should help to browse for that. And so, in terms of a test bed for, hey, the way we're doing RL, does it really work?
You know, can we really get the model to learn longer-time-horizon, meaningful, extended behaviors?
It feels like kind of a natural place to start in some ways, in that it's also fairly likely to be useful in a relatively short amount of time. So it's like, yeah, let's try that. I mean, in RL, at the end of the day, you're defining an objective, and if you have an idea for who is going to find this most useful, you might want to tailor your objective to who you expect to be using the thing, what you expect they're going to want, what their tolerance is: do they want to sit through a 30-minute rollout of deep research?
When they ask for a report, do they want a page, or five pages, or a gazillion pages?
So, yeah, you definitely want to tailor things to who you think is going to be using it. I feel like there's a lot of almost white-collar behavioral work, or knowledge work, that you all are really capturing through this sort of tooling going forward.
And you mentioned software engineering is one potential area.
Deep research and sort of analytical jobs is another, where there's all sorts of really interesting work to be done that's super helpful in terms of augmenting what people are doing. Are there two or three other areas that you think are the most near-term interesting applications for this, whether OpenAI is doing it or others should do it? I'm just sort of curious how you think about the big application areas for this sort of technology. I guess my very biased one that I'm excited about is coding, and also research in general: being able to improve on the velocity at which we can do research at OpenAI, and others can do research when they're using our tools.
I think our models are getting a lot better very quickly at being actually useful, and it seems like they're reaching some kind of inflection point where they are useful enough to want to reach out to and use multiple times a day, for me at least, which wasn't the case before.
They were always a little bit behind what I wanted them to be, especially when it comes to navigating and using our internal codebase, which is not simple.
And it's amazing to see our more recent models actually really spending a lot of time trying to understand the questions that we ask them and coming back with things that save me many hours of my own time.
People say that's the fastest potential bootstrap, right?
In terms of each model subsequently helping to make the next model better, faster, cheaper, etc. And so people often argue that that's almost an inflection point on the exponent towards superintelligence: this ability to use AI to build the next version of AI.
Yeah. And there's so many like different components of research too.
It's not just sitting off in the ivory tower thinking about things; there's hardware, there are various components of training and evaluation and stuff like this, and each of these can be turned into some kind of task that can be optimized and iterated over.
So there's plenty of uh you know room to to squeeze out improvements.
We talked about browsing the web, writing code, arguably the greatest tool of all, right?
Especially if you're trying to figure out how to spend your compute, you can write more efficient code.
um generating images, writing text.
There are certainly trajectories of action that I think are not in there yet, right?
Like reliably using a sequence of business software.
I'm really excited about the computer use stuff. It kind of drives me crazy in some sense that our models are not already just on my computer all day, watching what I'm doing.
And, well, I know that could be creepy for some people, and I think you should be able to opt out of that, or have it opted out by default. I hate typing, also.
I wish that I could just be working on something on my computer,
hit some issue, and just be like, all right, what am I supposed to do with this, and just ask. I think there's tons of space for being able to improve on how we interact with the models, and this goes back to them being able to use tools in a more intuitive way.
I guess using tools closer to how we use them.
Um it's also surprising to me how intuitively our models do use the tools we give them access to.
It's like weirdly humanlike, but I guess that's not too surprising given the data they've seen before. But yeah, I think a lot of things are weirdly humanlike.
Like, my intuition for why tool use is so impactful to test-time scaling, why the combination is so much better: take any role; when you are trying to make progress against a task, you can make a decision as to, do I get external validation or do I sit and think really hard? And usually one is more efficient than the other, and it's not always just sit in a vacuum and think really hard with what you know. Yeah, absolutely. You can seek out new inputs; it doesn't have to be this closed system anymore. And I do feel like the closed-system-ness of the models is still sort of a limitation in some ways.
You're not necessarily turning this loose. I mean, I think it'd be great if the model could control my computer, for sure, but in some sense there's a reason we don't go hog wild and say, oh yes, here's the keys to the kingdom, have at it.
There are still asymmetric costs between the time you can save and the types of errors you can make. And so we're trying to iteratively deploy these things, try them out, and figure out where they are reliable and where they're not. Because, yeah, if you did just let the model control your computer, it could do some cool stuff, I have no doubt. But do I trust it to respond to all of the random emails that Brandon sends me? Actually, maybe that task doesn't require that much intelligence. But more generally, do I trust it to do everything I'm doing? Some things, and I'm sure that set of things will be bigger tomorrow than it was yesterday.
But yeah, I think part of this is that we limit the affordances and keep it a little bit in the sandbox, out of caution, so that it doesn't send some crazy email to your boss, or delete all your texts, or delete your hard drive or something.
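One way to picture the "limit the affordances" idea is an allowlist that gates which of the agent's requested actions can touch the real world. This is a toy sketch with made-up action names, not an actual OpenAI sandboxing API:

```python
# Reversible / low-risk actions run freely; asymmetric-cost actions need a human.
ALLOWED_ACTIONS = {"read_file", "run_tests", "draft_email"}
BLOCKED_ACTIONS = {"send_email", "delete_file", "push_to_main"}

def execute(action: str, approve) -> str:
    """Run an agent-requested action only if it is allowlisted or approved."""
    if action in ALLOWED_ACTIONS:
        return f"executed {action}"
    if action in BLOCKED_ACTIONS and approve(action):
        return f"executed {action} with human approval"
    return f"refused {action}"

# The human is in the loop only for the risky actions.
print(execute("run_tests", approve=lambda a: False))   # executed run_tests
print(execute("send_email", approve=lambda a: False))  # refused send_email
```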
Is there some sort of organizing mental model for the tasks that one can do with increasing intelligence, test-time scaling, and improved tool use? Because I look at this and I'm like, okay, you have complexity of task, you have time scale, then you have the ability to come up with these RL rewards and environments, then you have usefulness, and maybe you have some intuition about diversity and generalization across the different things you can be doing. But it seems like a very large space, and scaling RL, like next-generation RL, it's just not obvious to me how you do it or how you choose the path. Is there some sort of organizing framework that you guys have that you can share? I don't know if there's one organizing framework, but there are a few factors, at least, that I think about in the very grand scheme of things. One is: in order to solve this task, how much uncertainty in the environment do I have to wrestle with? For some things it's purely a fact, like, who was the first president of the United States: there's zero environment I need to interact with to reach the answer to this question correctly; I just need to remember the answer and say it. If I want you to write some code that solves some problem, well, now I have to deal with a little bit of not purely internal-model stuff: I need to execute the code, and that code execution environment is maybe more complicated than my model can memorize internally. So I have to do a little bit of writing code, and then executing it and making sure it does what I thought it did, and then testing it, and then giving it to the user. And the amount of that sort of stuff outside the model, where you can't just recall the answer and give it to the user,
where you have to test something and run an experiment in the world and then wait for the result of that experiment: the more you have to do that, and the more uncertain the results of those experiments, in some sense that's one of the core attributes of what makes tasks hard.
And I think another is how simulatable they are.
Stuff that is really bottlenecked by time, like the physical world, is also just harder than stuff that we can simulate really well. It's not a coincidence that so many people are interested in coding and coding agents and things, and that robotics is hard; it's slower, and, you know, I used to work on robotics, and it's frustrating in a lot of ways. I think both of these, how much of the external environment you have to deal with, and how much you have to wrestle with the unavoidable slowness of the real world, are two dimensions that I sort of think about.
It's super interesting because if you look at um historically some of these models, one of the things that I think has continued to be really impressive is the degree to which they're generalizable.
And so I think when GitHub Copilot launched, it was on Codex, which was a specialized code model, and then eventually that just got subsumed into these more general-purpose models in terms of what a lot of people are actually using for coding-related applications.
How do you think about that in the context of things like robotics?
So, you know, there are probably a dozen different robotics foundation model companies now.
Do you think that eventually just merges into the work you're doing, in terms of there being these big general-purpose models that can do all sorts of things, or do you think there's a lot of room for these standalone other types of models over time? I will say the one thing that's always struck me as kind of funny about us doing RL is that we don't yet do it on the most canonical RL task of robotics. And I personally don't see any reason why we couldn't have these be the same model.
I think there are certain challenges with, I don't know, do you want your RL model to be able to generate an hour-long movie for you natively, as opposed to via a tool call? That's where it's probably tricky; you have more conflict between having everything in the same set of weights. But certainly the things you see o3 already doing, in terms of exploring a picture and things like that, are kind of early signs of something like an agent exploring an external environment.
So it doesn't sound too far-fetched to me. Yeah. I mean, I think the thing that came up earlier, the intelligence-per-cost thing: the real world is an interesting litmus test, because at the end of the day there is a frame rate in the real world you need to live on. And it doesn't matter if you get the right answer after you think for two minutes; the ball is coming at you now and you have to catch it. Gravity's not going to wait for you. So that's an extra constraint that we get to at least softly ignore when we're talking about these purely disembodied things.
That's kind of interesting, though, because really small brains are very good at that, you know. So you look at a frog, you start looking at different organisms, and you look at sort of relative compute.
Yeah. And you know very simple systems are very good at that.
Ants, you know. So I think that's kind of a fascinating question in terms of what's the baseline amount of capability that's actually needed for some of these real-world tasks that are reasonably responsive in nature.
It's really tricky with vision, too. Our models have some, I think maybe famous, edge cases where they don't do the right thing.
I think Eric probably knows where I'm going with this. I don't know if you've ever asked our models to tell you what time it is on a clock.
They really like the time 10:10.
Uh so yeah, it's my favorite time, too.
So that's that's usually what I tell people.
It's like over 90% or something like that of all clocks on the internet are at 10:10.
And it's because it looks, I guess, like a happy face, and it looks nice. But anyways, what I'm getting at is that our visual system was developed by interacting with the external world and having to be good at navigating things, you know, avoiding predators.
Um, and our models have learned vision in a very different type of way.
And I think we'll see a lot of really interesting things if we can get them to close the loop by reducing their uncertainty by taking actions in the real world, as opposed to just thinking about stuff. Yeah.
Hey, Eric, you brought up the idea of what in the environment can be simulated, right, as an input into how difficult it will be to improve on this.
As you get to long-running tasks, let's just take software engineering: there is a lot of interaction that is not just me committing code continually.
I'm going to talk to other people about the project, in which case you then need to deal with the problem of, can you reasonably simulate how other people are going to interact with you on the project, in an environment? That seems really tricky, right? I'm not saying that o3, or whatever set of foundation models now, doesn't have the intelligence to respond reasonably, but how do you think about that simulation being true to life, true to the real world, as you involve human beings in an environment, in theory? My, I guess, spicy take on that, I don't know how spicy, is that o3 in some sense is already kind of simulating what it'd be like for a single person to do something with a browser or something like that. And, I don't know, train two of them together, so that you have two people interacting with each other. And there's no reason you can't scale all this up so that models are trained to be really good at cooperating with each other. I mean, there's a lot of already-existing literature on multi-agent RL, and yeah, if you want the model to be good at something like collaborating with a bunch of people, maybe a not-too-bad starting point is making it good at collaborating with other models. Man, someone should do that.
Yeah. Yeah. Yeah.
We should really start thinking about that.
Eric, I think it is a I think it's a little bit spicy because yes, the work is going on.
It is interesting to hear you think that is a useful direction. I think lots of people would still like to believe, not me, that "my comment was extra good on this pull request" or whatever it is, right? And okay, I can sympathize with that.
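A minimal sketch of the two-models-collaborating idea discussed above: two instances of a model talk through a task, each seeing the other's turns as user input. It assumes the OpenAI Python SDK; the model name, prompts, and loop length are placeholders, not a description of OpenAI's training setup.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "o3"  # hypothetical placeholder; use whichever model you have access to

def respond(system_prompt: str, transcript: list) -> str:
    """One agent's turn, given the shared transcript from its own perspective."""
    completion = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt}] + transcript,
    )
    return completion.choices[0].message.content

def other_view(transcript):
    """Flip roles so the second agent sees the first agent's messages as 'user'."""
    flipped = {"user": "assistant", "assistant": "user"}
    return [{"role": flipped[m["role"]], "content": m["content"]} for m in transcript]

engineer = "You are implementing a feature. Ask your reviewer short clarifying questions."
reviewer = "You are reviewing the plan. Point out gaps and answer questions briefly."

# Transcript is kept from the engineer's point of view:
# "user" messages are the reviewer's, "assistant" messages are the engineer's.
transcript = [{"role": "user", "content": "Plan a caching layer for our internal API."}]
for _ in range(3):
    a_msg = respond(engineer, transcript)
    transcript.append({"role": "assistant", "content": a_msg})
    b_msg = respond(reviewer, other_view(transcript))
    transcript.append({"role": "user", "content": b_msg})
```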
Sometimes I see our models training and I'm like, uh, what are you doing?
You know, like, uh, you're taking forever to figure this out. And I actually think it would be really fun if you could actually train models in an interactive way.
Forget about just test time; I think it'd be really neat to train them to do something like that, to be able to intervene when it makes sense.
Yeah, just more of me being able to tell the model, hey, cut it out, in the middle of its chain of thought, and it being able to learn from that on the fly, I think would be great.
Yeah, I do think this is the intersection of these two things, where it's both a point of contact with the external environment that can be very high uncertainty, because humans can be very unpredictable in some cases, and it's sort of limited by the tick of time in the real world: if you want to deal with actual humans, humans have a fixed clock cycle in their heads. So yeah, if you want to do this in the literal sense, it's hard, and so scaling it up and making it work well, it's not obvious how to do this. Yeah, we are a super expensive tool call. If you're a model, you can either ask me, you know, meat bag over here, to help with something, and I'll try to think really slowly; in the meantime, it could have used the browser and read a hundred papers on the topic or something like that. So yeah, how do you model the trade-off there?
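As a back-of-the-envelope way to think about the "human as an expensive tool call" trade-off: compare the expected time to resolve a question by asking the human versus browsing first. The numbers below are made up purely for illustration.

```python
def expected_cost(p_success: float, latency_s: float, retry_cost_s: float) -> float:
    """Expected time to resolve a question: latency plus expected rework if it fails."""
    return latency_s + (1 - p_success) * retry_cost_s

ask_human = expected_cost(p_success=0.95, latency_s=1800, retry_cost_s=600)  # slow, reliable
browse    = expected_cost(p_success=0.70, latency_s=120,  retry_cost_s=600)  # fast, riskier

print("ask human:", ask_human, "s;  browse first:", browse, "s")
# With these made-up numbers, browsing first wins on expected time, which is
# why the model might read a hundred papers before bothering the meat bag.
```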
But the human part is important. I mean, I think in any research project, like my interactions with Brandon are the hardest part of the project, you know, like writing the code is that's the easy part.
Well, and there's there's some analog from um self-driving.
Elad's going to say that, you know, hanging out with me every week is the hardest part of doing this podcast, but it's my favorite part.
Look at how healthy their relationship is.
Eric, we need to learn from this.
No, we're honest. It's okay.
We got to work through it.
In self-driving, one of the classically hard things to do was to predict the humans and the children and the dogs, the agents in the environment, versus what the environment itself was.
And so I think there's some analogy to be drawn there.
Going back to just how you progress the o-series of models from here: is it a reasonable assessment that some people have, that the capabilities of the models are likely to advance in a spikier way, because you're relying to some degree more on the creativity of research teams in making these environments and deciding how to create these evals, versus scaling up on an existing data set in pre-training? Is that a fair contrast?
Spiky or like what's the plot here?
What's the x-axis and the y? Domain is the x-axis and y is capability.
Yes. Because you're choosing what domains you are really creating this RL loop in. I mean, I think this is a very reasonable hypothesis to hold. I think there is some counter-evidence that should be factored into people's intuitions. Like, Sam tweeted an example of some creative writing from one of our models that, I'm not an expert, and I'm not going to say it's publishable or groundbreaking, but I think it probably updated some people's intuitions on what you can train a model to do really well. And I think there are some structural reasons why you'll have some spikiness, just because, as an organization, you have to decide, ah, we're going to prioritize xyz stuff, and as the models get better, the surface area of stuff you could do with them grows faster than you can potentially say, hey, this is the niche we're going to carve out, we're going to try to do this really well. So I think there's some reason for spikiness. But I think some people will probably go too far with this, saying, oh yes, these models will only be really good at math and code, and everything else you can't get better at. And I think that is probably not the right intuition to have.
Yeah. And I think probably all major AI labs right now have some partitioning between, let's just define a bunch of data distributions we want our models to be good at and then throw data at them, and another set of people in these same companies who are probably thinking about how you can lift all boats at once with some algorithmic change. And I think we definitely have both of those types of efforts at OpenAI. Especially on the data side, there are naturally going to be things that we have a lot more data on than others, but ideally, yeah, we have plenty of efforts that will not be so reliant on the exact subset of data we did RL on, and it'll generalize better.
I get pitched every week, and I bet Elad does too,
by a company that wants to generate data for the labs in some way, or it's, you know, access to human experts, or whatever it is; there are infinite variations of this. If you could wave a magic wand and have a perfect set of data, what would it be that would advance model quality today? This is a dodge, but uncontaminated evals are always super valuable, and that's data.
And I mean, yeah, you want good data to train on, and that's of course valuable for making the model better, but I think it is often neglected how important it is also to have high-quality data, which is a different definition of high quality, when it comes to an eval. The eval side is often just as important, because you need to measure stuff, and, as you know from trying to hire people or whatever, evaluating the capabilities of a generally capable agent is really hard to do in a rigorous way. So yeah, I think evals are a little under-appreciated. But it's true, evals are, especially with some of our recent models, where we've kind of run out of reliable evals to track because they just solved a few of those. On the training side, I think it's always valuable to have training data that is at the next frontier of model capabilities.
I mean, a lot of the things that o3 and o4-mini can already do, those types of tasks like basic tool use, we probably aren't super in need of new data like that.
But I think it'd be hard to say no to a data set that's a bunch of multi-turn user interactions in some code base that's a million lines of code, where it's like a two-week research task of adding some new feature that requires multiple pull requests. Something that is super high quality and has a ton of supervision signals for us to learn from.
Yeah, that I think that would be awesome to have. You know, I definitely wouldn't turn that down. You play with the models all the time. I assume a lot more than average humans do. What do you do with reasoning models that you think other people don't do enough of yet?
Send the same prompt many, many times to the model and get an intuition for the distribution of responses you can get. It drives me absolutely mad when people do these comparisons on Twitter or wherever, and they're like, oh, I put the same prompt into blah-blah and blah-blah and this one was so much better. Because, dude... I mean, something we talked about a bit when we were launching is, yeah, o3 can do really cool things, like when it chains together a lot of tool calls, and then sometimes, for the same prompt, it won't have that moment of magic, or it will just do a little less work for you. So, yeah, the peak performance is really impressive, but there is a distribution of behavior. And I think people often don't appreciate that there is this distribution of outcomes when you put the same prompt in, and getting intuition about that is useful.
So, as an end user, I do this, and I also have a feature request for your friends in the product org.
I'll ask, you know, Oliver or something, but it's just, I want a button where, assuming my rate limits or whatever support it,
I want to run the prompt automatically, like, 100 times every time, even if it's really expensive.
And then I want the model to rank them and just give me the top one or two. Interesting. And just let it be expensive. Or synthesize across it, right?
You could also synthesize the output and see if there's something there, although maybe you're then reverting to the mean in some sense, relative to that distribution. But it seems kind of interesting. Yeah, maybe there's a good infrastructure reason you guys aren't giving us that button.
Well, it's expensive, but there are I think it's a great suggestion.
Yeah. Yeah, I think it's a great suggestion. How much would you pay for that? A lot. But I'm I'm a price insensitive user of AI.
Yeah, I see. Perfect.
You should have a Sarah tier as one of your tiers.
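The "run it many times and pick the best" request maps onto plain best-of-N sampling, which you can approximate today with the API. A rough sketch, assuming the OpenAI Python SDK; the model name, N, and judging prompt are placeholders, and a production version would validate the judge's reply rather than parse it naively:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "o3"   # placeholder; any available model
N = 8          # "100 times" in the conversation; kept small here for cost

def best_of_n(prompt: str, n: int = N) -> str:
    # Sample n independent responses to the same prompt.
    candidates = [
        client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for _ in range(n)
    ]
    # Ask the model to rank its own samples and return the strongest one.
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    judge = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": f"Here are {n} candidate answers to the same question:\n\n"
                       f"{numbered}\n\nReply with only the number of the best one.",
        }],
    )
    choice = int(judge.choices[0].message.content.strip().strip("[]"))
    return candidates[choice]

print(best_of_n("Summarize the trade-offs of best-of-N sampling in two sentences."))
```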
Exactly. Exactly. Yeah. I really like sending prompts to our models that are kind of at the edge of what I expect them to be able to do, just for funsies. A lot of the time, before I'm about to do some programming task, I will just ask the model to go see if it can figure it out. A lot of times there's no hope of it being able to do it.
And indeed sometimes it comes back and I just am pretty like I'm like a disappointed father.
But other times it does it, and it's amazing, and it saves me tons of time. So I kind of use our models as almost a background queue of work, where I'll just shoot off tasks to them, and sometimes those will stick and sometimes they won't.
But in either case like it's always a good outcome if something good happens.
That's cool. Yeah. I do that just to feel better about myself when it doesn't work.
I get depressed. I'm still providing value.
Exactly. When it works I feel even worse about myself. So, it's very uh hit or miss. Yeah. This has been great, guys.
Thank you. Yeah.
Thanks so much for coming along. Yeah. Thanks.
It was fun. Thanks for having us.
Find us on Twitter at No Priors Pod.
Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.