
Enterprise & AI | Kevin Weil, VP, OpenAI for Science & Jeetu Patel

By Cisco

Summary

Key Takeaways

  • **AI Pace Unprecedented**: "I've never seen anything like it. Every month computers can do something that computers have never been able to do in the history of the world before." [01:00], [01:06]
  • **Abandon Top-Down Roadmaps**: "You just can't sit there from on high and say, 'Okay, here's our 12-month roadmap. Now go do that.' Because you don't know what the world is going to look like in three months." [02:18], [02:22]
  • **Start with Imperfect AI**: "Do the things that barely work, because you go very quickly from 'this thing was completely impossible for computers' to 'this thing just kind of works'... to three to six months later, that thing now works 80% of the time." [03:32], [03:50]
  • **Codex Transforms Workflow**: "Before I go to bed, I'm like, 'Okay, what really hard thing can I give Codex to do while I sleep?'... I just missed an hour of productivity." [04:56], [05:02]
  • **AI Solves Open Math Problems**: "We've seen a huge number of open mathematical problems that the best mathematicians in the world have tried to solve for years and haven't. We've seen a number of those fall to AI-driven solutions, mostly by GPT 5.2." [09:00], [09:13]
  • **AI as Hypothesis Metal Detector**: "You know what AI is for me? It's a metal detector for hypotheses. It brings together disparate information... and lands on the best possible ideas. Don't waste your time on the bad ones." [10:34], [10:40]

Topics Covered

  • AI Pace Demands Constant Coding
  • Bottoms-Up Building Trumps Roadmaps
  • Embrace Imperfect AI Early
  • AI Solves Open Science Problems
  • High Agency Wins AI Era

Full Transcript

It's actually impressive, the progress that I think all of you folks are making. And it's almost impossible to predict who's going to come out ahead. And so for customers, in fact, that does tend to be a little bit of a confusing thing, because, you know, one week one model's doing better, the next week someone else does better.

So let's start with that. I want to talk a little bit about your product role at OpenAI for just a bit, but then I want to really shift into what's happening in science with OpenAI. Starting on the product side: you know, you've been the CPO of iconic companies, Twitter, Instagram, Planet, and then there's what you did at OpenAI. Contrast what you saw at OpenAI with all the other massive hype cycles that you've seen.

>> I think the biggest change is undoubtedly just how fast the world is moving right now, right?

>> I've never seen anything like it. Every month computers can do something that computers have never been able to do in the history of the world before. And that wasn't true when I was at, you know, Twitter in 2009, or Instagram in 2015, 2016. When you're building a normal product, you're building on technology that we all know: databases, etc. Databases get a little bit better every year, but they're basically the same thing from one year to the next. The tools that you use, you kind of know, and you're building on top of them. Now the underlying technology changes, and suddenly you have superpowers next quarter that you couldn't have imagined you had today, and it totally changes how you work. I mean, on the one hand, everything is going really fast, which is exciting.

Um, I feel like if you're not spending your time writing code, if you're not very quickly translating an idea you have, or a bug you hear about, into a Codex task and just, you know, flipping around and fixing it yourself... you know, why not?

>> Uh, and that's super empowering.

>> It also, I think, leads us to be much more bottoms-up in the way that we build, in the way we innovate. Because you just can't sit there from on high and say, "Okay, here's our 12-month roadmap. Now go do that." Because you don't know what the world is going to look like in three months. And so of course it's still important to have strategy, to have direction, and for people to understand what you're trying to accomplish and why. But the world today, I think, is much more built for people to just be high agency, take matters into their own hands, understand the direction you're trying to go, and then you push responsibility down and your teams move really fast.

>> Well, you know, the one thing that this audience, and the people watching online that are of the same ilk, would probably like is this: I think all of us in the community that are building technology for AI would like us to not just move really fast, but figure out a way to make sure that the absorption rate in companies is higher. Any ideas on what we can do to go out and accelerate the absorption rate? Because right now the absorption rate is still pretty slow.

>> Yeah, I think the number one thing is just to get everybody experimenting with it. You know, you can't stand apart from AI and expect to keep up, because the iteration cycle is so fast. You've got to just get in there and, you know, ride the roller coaster. Do the things. Don't wait for it to be perfect. Do the things that barely work, because one of the things I've learned over a couple of years at OpenAI is that you go very quickly from "this thing was completely impossible for computers for as long as we've been alive," to "this thing just kind of works," and, you know, it's a little bit frustrating, it's imperfect, the model makes mistakes, but it works 5 or 10% of the time, to, three to six months later, that thing now works 80% of the time, and you would just never imagine not using AI for that thing again. But you're not going to keep up with that if you're not trying in the stages when it's working at 5 or 10%. And you just have to get everyone thinking that way.

I will say, I went through a bit of a transformation relatively recently, even with Codex, which is our coding agent. I mean, I'd been using it before, but sporadically; it hadn't changed how I work. And then we launched a product relatively recently called Prism, and it's a small team, and it was kind of all hands on deck, so I was contributing code and I was starting to use Codex all the time. And I got to a point where I was sitting in a meeting, my boss's staff meeting, right? So it was time to go in, close the laptops, pay attention. And I closed my laptop and I was like, "Oh, shoot. I didn't get a Codex job running before I closed my laptop."

>> I just missed an opportunity.

>> I just missed an hour of productivity. I could have been fixing something, building a new feature. I mean, Codex could have been working for me to do that.

>> Fascinating.

>> Same thing. Before I go to bed, I'm like, "Okay, what really hard thing can I give Codex to do while I sleep?" and it'll keep working on your behalf. It's completely changed the way that I think.

>> Yeah.

>> But, you know, that sort of approach... you can build so much faster.

>> It's like having 10 things going in parallel.

>> The other thing you taught me, and we were actually in the gym at like 10 o'clock at night, Kevin and I, after a board meeting in DC, was when you said to me, "You know, the first time that your engineers use this, they're not going to be great at it. So don't expect greatness the first time, but you've got to make sure that you get everyone to start using it." And since then, you know, close to 80% of our engineers are using it now on a regular basis.

>> That's awesome.

>> And lo and behold, you know, we now have the first product that's going to be 100% written with Codex. And that's amazing.

>> Yeah, that was very cool.

>> Yeah, you're totally right. It's a skill like anything else, and you learn how to channel it: when the agent does something imperfect, you actually take that lesson and put it in your agents.md file, so the next time the AI knows not to make that same mistake again. So it is a totally different way of working, but man, when you embrace it... First of all, it's incredible that you have a team that is entirely building off of Codex. My prediction is that very quickly you'll have two, three, four, five teams.

>> The goal is half a dozen teams by the end of '26, at least.

>> Yeah, I bet you'll beat that handily. And the transformation that you and Chuck are driving is just fantastic. But I also bet that the way that happens is other teams look at that team and go, "Hey, why can't we move as fast as they are?"

>> Totally.

>> Yeah. Okay. So, let's talk about AI and science. What is so significant about this? What made you say, "You know what, that's where I want to spend the next however many years of my life"? And what have you learned?

>> Yeah. So OpenAI's mission is to build AGI and make it beneficial for all of humanity, and I can't think of many ways that AGI will more positively benefit humanity than being able to accelerate science. So our mission is to accelerate science.

>> You studied physics in undergrad? You're like the perfect kind of person to...

>> I studied physics. I have most of a PhD, and then I dropped out and started working at startups. Most of my colleagues have actual PhDs. But if we can truly accelerate science, if we can do the next 25 years of science in the next five years instead, that means we will be sitting there in 2030 with the technology and the science of 2050. How amazing is that? I mean, think of how science shapes our lives: the devices that we use, the medicine that we take. It's everywhere.

>> So for the people that are not scientists, give us real examples of where this could meaningfully change day-to-day lives.

>> Yeah. So the analogy that I use, and I think it's a good one for this audience too: in 2025, AI completely changed software engineering, right? At the beginning of 2025, if you were using Codex to write most of your code, you were an early adopter. By the end of 2025, just 12 months later, if you were not using Codex or Claude Code or whatever to write most of your code, you were falling behind. Teams were moving faster than you. And that's an entire, huge, multi-trillion-dollar industry that changed in 12 months. I think the same thing will happen with science in 2026. And we're already seeing examples of it.

>> In '26 itself?

>> In '26 itself. And I don't mean that by the end of the year we're going to have solved science. Of course not. But it's that same thing where, if you're a scientist and you're using AI to collaborate heavily in your work today, you're an early adopter. And it's starting to pay dividends. Just in the last few months, really just in January, we've seen a huge number of open mathematical problems, problems that the best mathematicians in the world have tried to solve for years and haven't, fall to AI-driven solutions, mostly by GPT 5.2. So it's not just doing the thing that you think of with AI, which is, you know, it's read a lot of information and knows how to bring information efficiently together to answer questions. No, it's going beyond the frontier of human understanding and what humans have been able to do. And it's not just mathematics. We're seeing it in physics, in biology, chemistry, materials science. And again, I'm not saying every problem is solvable. The models can't do everything yet, but they sure can solve open problems that some of the best scientists in the world haven't yet been able to solve.

>> Give us a flavor of an example. Is there going to be a new material that gets built in '26 that we've never had before?

>> Yeah. So, take materials science, right? In materials science, experiments are really expensive. It's very time-consuming, and often money-consuming, to go run experiments in the real world. So today you've got materials scientists who do their best to bring together everything that they know to try and design, you know, a material with a certain set of properties, and then they go and try to create that in the lab with the best hypothesis they can come up with. I was talking to a scientist the other day and he said, "You know what AI is for me? It's a metal detector for hypotheses. Of all the things that I could think about, it brings together disparate information. It helps me sort through a whole bunch of ideas and land on the best possible ones, and for the bad ones it tells me: don't waste your time."

>> And so, you know, that's its own form of acceleration. Instead of doing 10 things where nine of them don't work, you land on the one that's going to work right away.

But then you start thinking about, you know, robotics, robotic labs. Because you don't need graduate students pipetting things; they need to go to sleep, they have other things going on. I think we will very quickly live in a world where AI is helping refine which experiment, or which set of experiments, you're running, and then it delivers that set of experiments to a set of robotic arms in a lab that are able to run the experiments themselves, with as much parallelism as you want. By the way, you can scale that horizontally. And then the results of those experiments are piped back into the AI. The AI reasons some more, designs a new set of experiments, and off you go. Think of how much faster that moves than a traditional experimental setting.

That's going to be the norm. I mean, we're going to see it this year. It'll be the norm very soon. And we will move faster as a society. We will discover more things. We will solve more problems this way. And I think that's really exciting.

>> So the underlying apparatus and tooling for scientific collaboration is what you're talking about, that you will have at a Codex level in 2026.

>> Yeah, it's that kind of transformation, where at the beginning of the year you've got early adopters, and they're seeing value, but it's certainly not super widespread, and by the end of the year it's like, okay...

>> The world has changed.

>> Again, I'm not claiming every problem will be solved. Certainly not every problem in software engineering is solved either, but you can't deny that the world is completely different if you're a software engineer. And I think we're going to see that we will live in a very different world for scientists as well, in a very good way.

>> And how long have you been doing the AI for science thing now?

>> Like three or four months.

>> Okay. Yeah.

>> What's the most surprising discovery that you've had in the time that you've been doing it?

>> Um...

>> Something that was unexpected, like you're like, "Oh my goodness..."

>> Well, I mean, I think there's something really interesting about how quickly we all, as humans, adapt to the pace of AI. Like, for people that aren't from here: have you ridden in a Waymo yet? If you haven't, you totally should, because I guarantee you, well, at least for me, my first 10 seconds in a Waymo were like, "Oh my god, watch out for that bicycle," you know, holding on to whatever I can hold on to. And then for the next five minutes you're like, whoa, I'm being driven around San Francisco by a robot. I am living in the future. This is amazing.

>> And then five minutes later, I'm bored, looking at my phone, scrolling through my feed.

>> You know, how quickly this thing that blew my mind becomes suddenly totally passé and, you know, de rigueur to me. And I think we see that. I mean, you go back six months, a year, and the idea that AI would be solving open problems in mathematics was completely ridiculous. Couldn't happen. People would say, "Oh, it's never going to happen." Now here we are, and we're like, well...

>> Yeah, but it's not the Riemann hypothesis.

>> Right, right.

>> It's not that open problem, it's only this one. And, you know, we're going to do that...

>> ...all the way across, exponentially faster.

>> So I think that's awesome, actually. I think it's really cool that we adapt that fast.

>> And do you think we have the right evals in science? That was one of the areas where you couldn't tell the progress to some degree, because we were not creative enough in the evals to actually measure how good the models have gotten.

>> Yeah, it's really interesting, because you get to a point now where, in order to evaluate the model on a thing that is at the literal frontier of a scientific field, there aren't that many people who can do it, right? The model puts something out there, and, like in any field where you're interacting at the edge of your abilities, especially at the frontier, the model is wrong as often as it is right when it's trying to solve something no one has solved before. And so you've got to tell apart sometimes subtle things at the frontier. Even in physics, which was as close to my field as there was, I can't do it. And so we have physicists on our team who are on leave from a university, because we need that level of ability in order to understand these things and to build new evals. So it does get really hard. And at some point we're going to need to use the model to build evals for, you know, the next phase of the model, because we're going to be at the edge, and maybe past, what humans can do.

>> I asked this question of Krieger, and I thought I'd ask you the same: what are things that you are worried about, and what are areas that you're really excited about right now, which have a new set of possibilities that you did not even think possible last year?

>> Yeah...

>> And that might be... science might be the answer. But...

>> Science is definitely part of the answer.

>> What are you worried about most? And let's talk about something beyond safety and security, because we've talked about that a fair amount.

>> Yeah, yeah. I think the unknown thing is just how society is going to change. It's going to have to change really fast, and what is that going to look like?

>> Do you think universal basic income is, like, a foregone conclusion, that we're going to have to have it?

>> I don't know. So, I am not a person who believes that we will be happy sitting around eating grapes, collecting our UBI, and writing poetry. I just think we as humans strive to do something bigger than ourselves and to accomplish things. I don't think that goes away. And that's what gives me confidence, ultimately, that we're going to get through all of this change just fine, because humans are incredibly adaptable and full of ingenuity and drive. So I'm optimistic, but I do think there will be a lot of change. One of the things that's most exciting, though, is that we can all create. If you have an idea right now... say you had an idea two years ago, and you're not an engineer, and it involved writing code, that was kind of hard to deal with, right? You had to go find somebody, or go hire a team somewhere, or whatever, and probably the activation energy was just too much, and you didn't do it. Now, literally, you can go write a prompt in Codex, and it'll go work for an hour, and you'll have a working version of whatever you were thinking about, and then you can go back and iterate and, you know, make it better. There is no excuse not to be creating whatever you can think of.

>> So what becomes a scarce resource in that world?

>> Well, I mean, we believe compute becomes a scarce resource.

>> Right. No, but I mean, does judgment become...

>> I think judgment, agency... I think this moment selects for people who are high agency more than ever before. Someone who says, "You know what? I have an idea, and rather than letting it stay an idea, I'm going to go build it. I'm going to build it this morning, I'm going to have it in the afternoon, then I'm going to improve it, and by the end of the day I'm going to have a thing that I didn't have this morning." I think those people who are high agency, who are curious, who are learners, and who are just going to use the new tools to accomplish even more... I hope that's not a scarce resource. I hope that's a very common resource. But that's what the future is going to select for, I think.

>> Last question: why did you join the Cisco board?

>> First of all, what an iconic company, important for any number of things, including our national security. And what a critical moment, where we need what Cisco delivers even more, and Cisco has an incredible opportunity with the way that the world is building out infrastructure. We really do believe compute is one of the most valuable resources. Basically, the more compute you have, the more intelligence you can provide, and I think there will be infinite demand for intelligence. So Cisco has an incredible opportunity. You and Chuck and the team are driving one of the most impressive transformations I've ever seen, with respect to, you know, a big company reacting and changing and taking advantage of what AI has to offer. I just think those are the most interesting moments of a company.

>> Yeah.

>> Um, so I'm just, you know, proud to be a part of it.

>> We love you being a part of it. Is there a question I didn't ask that you wish I'd asked?

>> I'm going to turn it around on you: what are you most excited about?

>> This is... I'm supposed to be interviewing. I know.

>> Um, no, I actually think one of the biggest concerns that we've had is that we have always been resource-constrained.

>> Yeah.

>> On the ideas that we have that we want to prosecute; we have an idea factory that's way larger than the resources to prosecute them. I think we might actually change that balance over the course of this, largely because of, you know, the kind of body of work and partnership that we're doing with you and others, but largely with OpenAI. But I do feel like, if this thing starts to really crank, where our AI products are the first ones that actually get self-written, and then eventually we want to make sure that Martin is actually building silicon that is being done 80, 90% with AI... And it's not just about building products fast for the sake of building them fast. It's very seldom in life that you get a chance to be part of a movement. Look, I'm 54. This is probably the last big shift that I'll see in the next decade, and this is a seismic one. It could change the shape of humanity, and we actually can participate in making that happen. And it can happen without the constant worry of me having to keep bugging Chuck saying, "We need more money, we need more money." Now, by the way, Chuck, we'll still need some more money for a while.

>> [laughter]

>> I'm turning on the recorder. Um, but I do feel like that's the thing that's the most exciting: if we get to build something that is so magical at very fast clock speeds, then the only thing that we have to be extremely careful of is that we don't have AI slop in the market, and that we build things truly with a level of care and craftsmanship and judgment and intuition. But I feel like we as a community, and especially at Cisco with the culture we've built, would be very good at that. And then, you know, not being constrained in certain areas where we're like, "Man, I wish I could have prosecuted that idea."

>> Yeah.

>> Seems like a very, very fun way to spend the next, you know, decade.

>> Yeah, I love it.

>> Thank you for being here.

>> Yeah, thank you for having me.

[applause]

>> Thanks, man.
