Andrej Karpathy on Code Agents, AutoResearch, and the Loopy Era of AI
By No Priors: AI, Machine Learning, Tech, & Startups
Summary
Topics Covered
- Code Shift to Agent Delegation
- Token Throughput Replaces FLOPs
- Claws Automate Home Systems
- Auto Research Enables Self-Improvement
- Digital Unhobbling Precedes Physical
Full Transcript
Code's not even the right verb anymore, right? But I have to express my will to my agents for 16 hours a day. Manifest. Manifest. How can I have not just a single session of Claude Code or Codex or some of these agent harnesses?
How can I have more of them? How can I do that appropriately? The agent
part is now taken for granted. Now the claw-like entities are taken for granted. And
now you can have multiple of them. And now you can have instructions to them.
And now you can have optimization over the instructions. But I mean, this is why it gets to the psychosis is that this is like infinite and everything is skill issue.
Hi, listeners. Welcome back to No Priors. Today, I'm here with Andrej Karpathy, and we have a wide-ranging conversation for you about code agents, the future of engineering and AI research, how more people can contribute to research, what's happening in robotics, his prediction for how agents can reach out into the real world, and education in this next age.
Welcome, Andrej. Andrej, thanks for doing this. Yeah, thank you for having me. So it's
been a very exciting couple of months in AI. Oh, yeah. You could say that.
I remember walking into the office at some point and you were like really locked in and I was asking what you were up to and you're like, I just, I have to code for 16 hours a day or code's not even the right verb anymore, right? But I have to express my will to my agents for 16 hours a day. Manifest. Because like there's been a jump in capability.
What's happening? Tell me about your experience. Yeah, I kind of feel like I was in this perpetual, I still am often in this state of AI psychosis, just like all the time, because there was a huge unlock in what you can achieve as a person, as an individual, right? Because you were bottlenecked by, you know, your typing speed and so on. But now with these agents, it really, I would say in
December is when it really just something flipped where I kind of went from 80-20 of like, you know, to like 20-80 of writing code by myself versus just delegating to agents. And I don't even think it's 20-80 by now. I think it's a lot more than that. I don't think I've typed a line of code, probably
since December, basically, which is like an extremely large change.
I was talking about it to, for example, my parents and so on. And I don't think a normal person actually realizes that this happened or how dramatic it was. Like literally, if you just find a random software engineer at their desk and look at what they're doing, their default workflow of building software is completely different as of
basically December. So just like in the state of psychosis of trying to figure out what's possible, trying to push it to the limit. How can I have not just a single session of, you know, Claude Code or Codex or some of these agent harnesses? How can I have more of them? How can I do that appropriately? And then how can I use these claws? What are
these claws? And so there's a lot of new things. I want to be at the forefront of it, you know, and I can see that I'm not at the forefront of it. And I see lots of people on Twitter doing all kinds of things, and they all sound like really good ideas, and I need to be at the forefront or I feel extremely nervous. And so I guess
I'm just in the psychosis of like, what's possible? Because it's unexplored fundamentally. Well, if
you're nervous, the rest of us are nervous. We have a team that we work with at Conviction where the setup is, none of the engineers write code by hand, and they're all microphoned, and they just whisper to their agents all the time. It's the strangest work setting ever. And I thought they were crazy. And now I fully accept it. I was like, oh, this was the way, you're just ahead of it. What, how do you think about your own capacity now to explore or to do projects? Like, what is it limited by? Yeah, what is it limited by? I think everything, so many things. Even if they don't work, I think to a large extent you feel like it's a skill issue. It's not that the capability is not there. It's that you just haven't found a way to string together what's available. Like I just didn't give good enough instructions in the agent's AGENTS.md file or whatever it may
be. I don't have a nice enough memory tool that I put in there or something like that. It all kind of feels like a skill issue when it doesn't work, to some extent. You want to see how you can parallelize them, et cetera. And
you want to be Peter Steinberger, basically. So Peter is famous. He has a funny photo where he's in front of a monitor with lots of, like, he uses Codex.
So lots of Codex agents tiling the monitor. And they all take about 20 minutes if you prompt them correctly and use the high effort. He has multiple, you know, 10 repos checked out. And so he's just going between them and giving them work. You can move in much larger macro actions. It's not just, here's a line of code, here's a
new function. It's like, here's a new functionality and delegate it to agent one. Here's
a new functionality that's not going to interfere with the other one, give it to agent two. And then try to review their work as best as you can, depending on how much you care about that code. Like, what are these macro actions that I can manipulate my software repository by? Another agent is doing some research, another agent is writing code, another one is coming up with a plan for some new implementation. And so everything just happens in these macro actions over your repository. And
you're just trying to become really good at it and develop a muscle memory for it is extremely... Yeah, it's very rewarding, number one, because it actually works. But it's
also kind of like the new thing to learn. Hence the psychosis.
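The workflow described here — independent macro actions farmed out to concurrent agent sessions — can be sketched as a small dispatcher. Everything in this sketch is hypothetical: `run_agent` stands in for whatever harness you drive (say, a Claude Code or Codex subprocess), and the task strings are made up.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in for one agent session (e.g. spawning a Claude Code or
    # Codex process); stubbed so the sketch runs anywhere.
    return f"done: {task}"

# Independent "macro actions" -- features chosen so the sessions
# won't step on each other's files.
tasks = [
    "add retry logic to the HTTP client",
    "write unit tests for the parser",
    "research options for the memory tool",
]

# Fan the tasks out; map() returns results in task order.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))

for r in results:
    print(r)
```

The key design choice mirrors the conversation: pick tasks that don't interfere, so review — not coordination — is the human's job.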
Yeah, I do feel like my instinct is, whenever I'm waiting for an agent to complete something, the obvious thing to do is, well, I can do more work, right? Like if I have access to more tokens, then I should just parallelize, add more tasks. And so that's very stressful, because if you don't feel very bounded by your ability to spend on tokens, then, you know, you are the bottleneck in the system that has max capability. Yeah, if you're not maximizing your subscription, at least. And ideally for multiple agents. Like if you run out of the quota on Codex, you should switch to Claude or whatnot. I don't know, that's what I've been trying to do a little bit. And I feel nervous when I have subscription left over. That just means I haven't maximized my token throughput. So I actually kind of experienced this when I was a PhD student. You would feel nervous when your GPUs were not running. Like you have GPU capacity and you're not maximizing the flops available to you. But now it's not about flops, it's about tokens. So what is your token throughput and what token throughput do you command? I would actually argue that
it's very interesting that we had, you know, at least 10 years where, in many engineering tasks, people just didn't feel compute bound. And the entire industry feels that now. They feel resource bound. And now that you have this big capability jump, you're like, oh, actually it's not my ability to access the compute
anymore. I'm the binding constraint. Yeah, it's a skill issue, which is very empowering, because you could be getting better. So that's why I think it's very addictive, because there are unlocks when you get better. Where do you think it goes? Like if you just think about, okay, you know, Andrej is iterating, and everybody else is, for 16 hours a day, getting better at using coding agents. What does it look like in
a year of like you've reached mastery? Yeah, what does mastery look like, right? At
the end of the year or like two, three years, five years, 10 years, et cetera. Well, I think everyone is basically interested in going up the stack. So I
would say, yeah, it's not about a single session with your agent. Multiple agents, how do they collaborate, and teams, and so on. So everyone's trying to figure out what that looks like. And then I would say Claw is also kind of an interesting direction because it really, when I say Claw, I mean this layer that kind of takes persistence to a whole new level. It's something that keeps looping. It's not something
that you are interactively in the middle of. It kind of has its own little sandbox, its own little... you know, it kind of like does stuff on your behalf, even if you're not looking kind of thing. And then also has like maybe more sophisticated memory systems, et cetera, that are not yet implemented in agents. So OpenClaw has a lot more sophisticated memory, I would say, than what you would get by default,
which is just memory compaction when your context runs out, right? You think that's the piece that resonated with more users, versus perhaps broader tool access? For
OpenClaw? Yeah. I think there's at least five things. There's a lot of really good ideas in here. Yeah. Good job, Peter. I mean, Peter has done a really amazing job. I saw him recently and I talked to him about it, and he's very humble about it, but I think he innovated simultaneously in like five different ways and put it all together. So for example, the SOUL.md document: he actually really crafted a personality that is kind of compelling and interesting. And I
feel like a lot of the current agents, they don't get this correctly. I actually
think Claude has a pretty good personality. It feels like a teammate and it's excited with you, et cetera. I would say, for example, Codex is a lot more dry, which is kind of interesting, because in ChatGPT, the model is a lot more upbeat and highly sycophantic. But I would say Codex, the coding agent, is very dry. It
doesn't seem to care about what you're creating. It's kind of like, oh, I implemented it. It's like, okay, but do you understand what we're building? It's true. You know,
it doesn't... And the other thing I would say is, for example, with Claude, I think they dialed in the sycophancy fairly well, where when Claude gives me praise, I do feel like I slightly deserve it. Because sometimes I give it not very well-formed thoughts, an idea that I don't think is fully baked, and it doesn't actually react very strongly. It's like, oh yeah, we can
implement that. But when it's a really good idea, by my own account, it does seem to reward it a bit more. And so I kind of feel like I'm trying to earn its praise, which is really weird. And so I do think the personality matters a lot. And I think a lot of the other tools maybe don't appreciate it as much. And I think in this aspect also, Peter really cares
about this. And so that was correct. And then the memory system, and then just, you know, he's just having fun with this. And then the single WhatsApp portal tool of the automation. Yeah. Is there something that you have done personally with your claws beyond software engineering that you think is fun or interesting? Yeah. So in January, I had a claw. I went through a period of claw psychosis. So I built,
I have a claw, basically, that takes care of my home, and I call him Dobby, the elf claw. And basically I used the agents to find all of the smart home subsystems of my home on the local area network, which I was kind of surprised worked out of the box. Like I just told it, I think I have Sonos at home. Can you try to
find it? And it goes and does an IP scan of all the computers on the local area network. And it found the Sonos system, and it turned out that there's no password protection or anything like that. It just logged in and it's like, oh yeah, you have these Sonos systems installed. Let me try to reverse engineer how it's working. It does some web searches and it finds,
okay, these are the API endpoints. And then it's like, do you want to try it? And I'm like, whoa, you just did that. And I'm like, yeah, can you try to play something in the study? And it does, and music comes out, and I'm like, I can't believe I just... That's crazy. That's like three prompts. I
can't believe I just typed in, can you find my Sonos, and suddenly it's playing music. And it did the same for lights. And so basically it kind of hacked in, figured out the whole thing, created APIs, created a dashboard so I could see the command center of all of my lights in the home. And then it was switching lights on and off. And, you know, so I can ask it for, like, sleepy time. And when it's sleepy time, that just means all the lights go off, et cetera. So it controls all of my lights, my HVAC, my shades, the pool and the spa, and also my security system. So I have a camera pointed outside of the house. And anytime someone rolls in, I have a Qwen model that looks at the videos. So first of all, there's change detection. And then based on change detection, it goes to Qwen. And
then it actually tells me, it sends me a text on my WhatsApp. It shows an image from the outside. And it says, hey, a FedEx truck just pulled up, and you might want to check it, you got new mail or something like that. And Dobby just texted me this. It's really incredible. So Dobby is in charge of the house. I text with it through WhatsApp.
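The camera pipeline just described — cheap change detection gating an expensive vision-model call — could be sketched roughly like this. The threshold, the flat-list frame encoding, and the `describe` stub are all assumptions; a real version would call an actual Qwen endpoint and a WhatsApp API.

```python
def frame_changed(prev, curr, threshold=12.0):
    # Mean absolute pixel difference between two grayscale frames
    # (flat lists of 0-255 ints) -- a cheap gate before the model call.
    diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
    return diff > threshold

def describe(frame):
    # Stand-in for the vision-model call (a Qwen model in the story);
    # a real version would send the frame and return a caption.
    return "a delivery truck pulled up"

prev_frame = [0] * 100               # quiet street
curr_frame = [0] * 60 + [200] * 40   # something large entered the view

if frame_changed(prev_frame, curr_frame):
    # Real version: push the caption out via a WhatsApp API.
    print(f"Dobby: {describe(curr_frame)}")
```

The two-stage design is the point: pixel differencing is nearly free, so the model only ever sees frames worth describing.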
And it's been really fun to have these macro actions that maintain my house. I
haven't really pushed it way more beyond that. And I think people are doing a lot more crazy things with it. But for me, even just a home automation setup: I used to use like six apps, completely different apps. And I don't have to use these apps anymore. Dobby controls everything in natural language. It's amazing. And so I think I haven't even pushed the paradigm fully, but already that is so helpful and
so inspiring, I would say. Do you think that's indicative of what people want from a user experience perspective with software? Right? Because I think it's pretty ignored that it takes humans effort to learn new software, like new UI.
Yeah, I think to some extent, that's right. It's like working backwards from how people think an AI should be. Because what people have in their mind of like what an AI is, is not actually what an LLM is by like in a raw sense, like LLM is a token generator, you know, like more tokens come out. But
what they think of is this persona identity that they can tell stuff and it remembers it, you know, and it's just kind of an entity behind the WhatsApp. It's a lot more understandable. So I think to some extent it's matching the expectations that humans already have for how AI should behave. But under the hood, there are a lot of technical details that go into that. And LLMs are too raw of a primitive to actually type check as AI, I think, for most people, if that makes sense. Yeah. I think that's like how we understand what the AI
is, and the description of it as Dobby or some personality obviously resonates with people. I also think that the unification that you did across your six different software systems for your home automation speaks to a different question of, do people really want all the software that we have today? Yeah. Right. Because I would argue, well, you have the hardware, but you've now thrown away the software, or the UX layer of it. Do you think that's what people want? Yeah, I think there's this sense that these apps that are in the app store for using these smart home devices, et cetera, these shouldn't even exist, in a certain sense. Like, shouldn't it just be
APIs, and shouldn't agents be just using it directly? And I can do all kinds of home automation stuff that any individual app will not be able to do, right? The LLM can actually drive the tools, call all the right tools, and do pretty complicated things. And so in a certain sense, it does point to this: maybe there's an overproduction of lots of
custom bespoke apps that shouldn't exist, because agents kind of crumple them up, and everything should be a lot more just exposed API endpoints. And agents are the glue of the intelligence that actually tool-calls all the parts. Another example
is my treadmill. There's an app for my treadmill, and I wanted to keep track of how often I do my cardio. But I don't want to log into a web UI and go through a flow, et cetera. All of this should just be: make APIs available. And this is kind of, you know, going towards the agentic sort of web, or agent-first tools and all this
kind of stuff. So I think the industry just has to reconfigure in so many ways, because the customer is not the human anymore. It's agents who are acting on behalf of humans. And this refactoring will probably be substantial in a certain sense. One way that people sometimes push back on this is, do we expect people to vibe code some of these tools? Do we expect normal
people to do this kind of stuff that I described? But I think to some extent, this is just, you know, technology as it exists today. And right now there is some vibe coding, and I'm actually watching it, and I'm working with the system. But I kind of feel like this kind of stuff that I just talked about, this should be free. Like in a year or two or three, there's no vibe coding involved. This is trivial. This is table stakes. Any AI, even the open source models, et cetera, can do this. You should be able to translate from a less technical human's intent very easily to this outcome. Yeah.
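The "everything should just be exposed API endpoints" idea can be sketched as a minimal agent-facing device API. The device state and the route are invented for illustration; the point is that an agent tool-calls plain JSON over HTTP instead of driving an app's UI.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"study_lights": "off"}  # hypothetical device state

class DeviceAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve raw state as JSON: no login flow, no UI -- just an
        # endpoint an agent can call directly.
        body = json.dumps(STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), DeviceAPI)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "agent" side is just one HTTP call:
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/state") as resp:
    data = json.load(resp)
server.shutdown()
print(data)
```

Anything an individual app could show, an agent can now compose with every other endpoint on the network.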
Today it's vibe coding, and it's involved, and not many people are going to do it. And you still have to make some design decisions, right? We were talking about, like, whether you take frames, for example. Yeah. But I kind of feel like the barrier will just come down, and it's just ephemeral software on your behalf, and some kind of Claw is handling all the details for you, but you're not involved. Claw has a machine and it will figure it out.
And it's just presenting your UIs and you're like saying stuff, you know? Why haven't
you, I guess, like pushed the boundaries of what you can do personally with Claw?
Like, is it, you know, you're focusing on more important projects, auto research, et cetera, or you're climbing the hill to mastery, or something else, right? Yeah, I just feel like I'm so distracted by everything. I spent like a week on the claw stuff and I have more to-dos almost. But I will say that.
Like Jensen told us, we're all just busier, unfortunately. Yeah. I didn't really take advantage of a lot of like email and calendar and all this other stuff. And I
didn't give it access because I'm still a little bit suspicious, and it's still very new and rough around the edges. So I didn't want to give it full access to my digital life yet. And part of it is just security, privacy, and just being very cautious in that realm. So some of it is held back by that, I would say. Yeah, maybe that's the dominant feature. But some of it is also just, I feel so distracted, because I had a week of claw and then other stuff is happening. And what was the, I mean, you've talked about being able to train, or at least optimize, a model as a task you want to see agents do, for a long time. What was the motivation behind auto research? Auto
research? Yeah, so I had a tweet earlier where I said something along the lines of: to get the most out of the tools that have become available now, you have to remove yourself as the bottleneck. You can't
be there to prompt the next thing. You need to take yourself outside. You have
to arrange things such that they're completely autonomous. How can you maximize your token throughput and not be in the loop? This is the goal.
And so I kind of mentioned that the name of the game now is to increase your leverage. I put in just very few tokens, just once in a while, and a huge amount of stuff happens on my behalf. So auto research: I tweeted that, and I think people liked it and whatnot, but they haven't maybe worked through the implications of it. And for me, auto research is an example of an implication of that, where I don't want to be the researcher in the loop, looking at results, et cetera. I'm holding the system back. So the question is, how do I refactor all the abstractions so agents can run for longer periods of time without your involvement, doing stuff on your behalf?
And auto research is just, yeah, here's an objective, here's a metric, here's your boundaries of what you can and cannot do, and go. And yeah, it worked. You were
surprised at its effectiveness. Yeah, I didn't expect it to work, because, so I have the project NanoChat. And fundamentally, I think a lot of people are very confused by my obsession with training GPT-2 models and so on. But for me, training GPT models and so on is just a little harness, a little playground for training LLMs. And fundamentally, what I'm more interested in is this idea of recursive self-improvement and to what extent you can actually have LLMs improving LLMs. Because I think for all the frontier labs, this is the thing, for obvious reasons. And they're all trying to recursively self-improve, roughly speaking. And so for me, this is kind of a little playpen for that. And I guess I'd tuned NanoChat already quite a bit by hand, in the good old-fashioned way that I'm used to. Like I'm a
researcher, I've done this for like, you know, two decades. I have some amount of like, what is the opposite of hubris? Instinct for it. Yeah. Earned the confidence. Okay.
I have like two decades of, oh, I've trained this model thousands of times, so I've done a bunch of experiments. I've done hyperparameter tuning. I've
done all the things I'm very used to and I've done for two decades. And
I've gotten to a certain point and I thought it was like fairly well tuned.
And then I let auto research go overnight, and it came back with tunings that I didn't see. And yeah, I did forget the weight decay on the value embeddings, and my Adam betas were not sufficiently tuned. And these things jointly interact. So once you tune one thing, the other things have to potentially change too.
I shouldn't be a bottleneck. I shouldn't be running these hyperparameter optimizations. I shouldn't
be looking at the results. There's objective criteria in this case. So you just have to arrange it so that it can just go forever. So that's a single sort of version of auto research, of like a single loop trying to improve. And I
was surprised that it found these things. You know, the repo was already fairly well tuned and it still found something. And that's just a single loop. These frontier labs, they have GPU clusters of tens of thousands of them.
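A single auto-research loop of the kind described — objective, metric, boundaries, go — reduces to something like the sweep below. The search space and the `evaluate` stub (standing in for a real training run, e.g. one NanoChat job returning validation loss) are invented for illustration.

```python
import itertools

# evaluate() stands in for a real training run; the knobs and the
# optimum here are made up.
def evaluate(cfg):
    wd, beta2 = cfg["weight_decay"], cfg["adam_beta2"]
    return (wd - 0.1) ** 2 + (beta2 - 0.95) ** 2  # pretend val loss

space = {
    "weight_decay": [0.0, 0.1, 0.3],
    "adam_beta2": [0.9, 0.95, 0.99],
}

best_cfg, best_loss = None, float("inf")
# Sweep the knobs jointly, since (as noted above) they interact:
for wd, b2 in itertools.product(space["weight_decay"], space["adam_beta2"]):
    cfg = {"weight_decay": wd, "adam_beta2": b2}
    loss = evaluate(cfg)
    if loss < best_loss:
        best_cfg, best_loss = cfg, loss

print(best_cfg, best_loss)
```

Because the metric is objective, nothing in this loop needs a human — which is exactly why it can run overnight.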
And so it's very easy to imagine how you would basically get a lot of this automation on smaller models. And fundamentally, everything around frontier-level intelligence is about extrapolation and scaling laws. And so you basically do a ton of the exploration on the smaller models, and then you try to extrapolate out. So you're saying our research efforts are going to get more efficient, like we're going to have better direction for when we
scale as well, if we can do this experimentation better. Yeah, I would say the most interesting project, and probably what the frontier labs are working on, is: you experiment on the smaller models, you try to make it as autonomous as possible, remove researchers from the loop. They have way too much, what is the opposite of earned confidence? Yeah. They don't know. They shouldn't be touching
any of this really. And so you have to like rewrite the whole thing because right now, I mean, certainly they can contribute ideas, but okay, they shouldn't actually be enacting those ideas. There is a queue of ideas and there's maybe an automated scientist that comes up with ideas based on all the archive papers and GitHub repos and it funnels ideas in or researchers can contribute ideas, but it's a single queue and
there's workers that pull items and they try them out, and whatever works just gets put on the feature branch, and maybe some people monitor the feature branch and merge to the main branch sometimes. So, yeah, just removing humans from all the processes and automating as much as possible and getting high tokens-per-second
throughputs. And it does require rethinking all the abstractions, and everything has to be reshuffled. So, yeah, I think it's very exciting. If we take one more recursive step here, when is the model going to write a better program.md than you?
Yeah. So program.md is like... We're not in the loop. Yeah, exactly. So program.md is my crappy attempt at describing how the auto researcher should work: oh, do this, then do that, and then try these kinds of ideas, and here are maybe some ideas, like look at the architecture, look at the optimizer, et cetera. But I just came up with this in markdown, right? And so, yeah, exactly, you want some kind of an auto research loop, maybe, that looks for... you can imagine that different program.mds would give you different progress. So
basically every research organization is described by a program.md. A research organization is a set of markdown files that describe all the roles and how the whole thing connects. And you
can imagine having a better research organization. So maybe they do fewer stand-ups in the morning because they're useless. And this is all just code, right? And so one organization can have fewer stand-ups, one organization can have more. One organization can be very risk-taking, one organization can be less. And so you can definitely imagine that you have multiple research orgs. And they all have code. And once you have code, then you can imagine tuning the code. So 100% there's the meta layer of it. Did you
see my text about my contest idea? My contest idea was, let people write different program.mds, right? And so for the same hardware, where do you get the most improvement? Oh, I see. And then you can take all that data and give it to the model and say, write a better program.md. Yes, yes. Yeah,
exactly. We're going to get something better. Like, there's no way we don't. You can
100% look at where the improvements came from and ask, can I change the program.md such that more of these kinds of things would be done, or things that didn't work? Just meta-optimization. Yeah, you can 100% imagine doing that. So I
think this is a great idea. But it's like, you know, I think like you sort of go one step at a time where you sort of have one process and then second process and then the next process. And these are all layers of an onion, like the LLM sort of part is now taken for granted. The agent
part is now taken for granted. Now the claw-like entities are taken for granted. And
now you can have multiple of them. And now you can have instructions to them.
And now you can have optimization over the instructions. And it's just a little too much, you know? But I mean, this is why it gets to the psychosis: this is infinite, and everything is a skill issue. And that's why I feel like, yeah, coming back to it, this is why it's so insane. Okay.
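The "single queue of ideas, workers pull items, winners land on the feature branch" organization described a moment ago is literally a few lines of code. `try_idea` is a stub for a real experiment run, and the idea strings are made up.

```python
import queue
import threading

ideas = queue.Queue()  # the single queue of ideas
for idea in ["rotate embeddings", "fuse the optimizer step", "longer warmup"]:
    ideas.put(idea)

accepted = []  # stands in for the feature branch
lock = threading.Lock()

def try_idea(idea):
    # Stub for an actual experiment run; pretend these two ideas
    # beat the baseline.
    return idea in ("fuse the optimizer step", "longer warmup")

def worker():
    # Workers pull items until the queue is empty.
    while True:
        try:
            idea = ideas.get_nowait()
        except queue.Empty:
            return
        if try_idea(idea):
            with lock:
                accepted.append(idea)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(accepted))
```

Humans (or an automated scientist) only touch the system at two points: putting ideas on the queue, and occasionally reviewing what landed on the branch.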
Well, if we're just trying to like diagnose the current moment and what is a relevant skill right now, what do you think is the implication that this is the loop we should be trying to achieve in different areas and that it works? Like,
you know, remove... create the metric, or create the ability for agents to continue working on it without you. Do we still have performance engineering? Yeah. I mean, so there's a few caveats that I would put on top of the AI psychosis.
Number one, this is extremely well suited to anything that has objective metrics that are easy to evaluate. So for example, writing kernels for more efficient CUDA code for various parts of a model, et cetera, is the perfect fit. Because you have inefficient code, and then you want efficient code that has the exact same behavior but is much faster. Perfect fit. So a lot of things are a perfect fit for auto research, but many things will not be. If you can't evaluate it, then you can't auto research it, right? So that's caveat number one. And then maybe caveat number two, I would say, is: we're kind of talking about the next steps, and we kind of see what the next steps are. But fundamentally, the whole thing
still doesn't, it's still kind of like bursting at the seams a little bit and there's cracks and it doesn't fully work. And if you kind of try to go too far ahead, the whole thing is actually net not useful, if that makes sense.
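The "easy to evaluate" caveat above is concrete enough to sketch. Below is a minimal, hypothetical version of the acceptance check an auto-research harness might run on a candidate kernel rewrite; the function names and the accept-if-correct-and-faster rule are illustrative assumptions, not from any real tool.

```python
import random
import time

def reference_impl(xs):
    # Baseline: a deliberately plain sum-of-squares loop.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def candidate_impl(xs):
    # Candidate produced by an agent, claimed to be faster.
    return sum(x * x for x in xs)

def timed(fn, xs):
    start = time.perf_counter()
    fn(xs)
    return time.perf_counter() - start

def verify_candidate(ref, cand, trials=5, n=100_000, tol=1e-6):
    """Accept only if the candidate matches the reference output
    and is at least as fast on this machine."""
    rng = random.Random(0)
    xs = [rng.random() for _ in range(n)]
    if abs(ref(xs) - cand(xs)) > tol:
        return False, "wrong output"
    t_ref = min(timed(ref, xs) for _ in range(trials))
    t_cand = min(timed(cand, xs) for _ in range(trials))
    return t_cand <= t_ref, f"speedup {t_ref / t_cand:.2f}x"

ok, note = verify_candidate(reference_impl, candidate_impl)
print(ok, note)
```

The whole loop is objective: behavior is checked against the reference, speed is measured, and a wrong answer is rejected regardless of how fast it is.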
Because these models still are not, they've improved a lot, but they're still rough around the edges, is maybe the way I would describe it. I simultaneously feel like I'm talking to an extremely brilliant PhD student who's been like a systems programmer for their entire life and a 10 year old. And it's so weird because humans, like there's, I feel like they're a lot more coupled. Like you have to, you know, everything
is a lot more coupled. Yes, you wouldn't encounter that combination. This jaggedness is really strange. And humans have a lot less of that kind of jaggedness, although they definitely
have some, but humans have a lot more jaggedness. Sorry, the agents have a lot more jaggedness, where sometimes, you know, I ask for functionality and it comes back with something that's just totally wrong, and then we get into loops that are totally wrong, and I get so frustrated with the agents all the time still, because you feel the power of it, but it still does nonsensical things once in a while. For me as well. I get very annoyed when, um, I feel like the agent wasted a lot of compute on something it should have recognized was an obvious problem. Yeah, I think one of the bigger things, maybe what's underneath it, if I could hypothesize, is
fundamentally these models are trained via reinforcement learning. So they're actually struggling with the exact same thing we just talked about, which is: the labs can improve the models in anything that is verifiable, that has rewards. So, did you write the program correctly and do the unit tests check out, yes or no? But one of the things where they're struggling is, for example, I think they have a tough time with the nuance
of maybe what I had in mind or what I intended and when to ask clarifying questions. Yeah, it's just anything that feels softer is
like worse. And so you're kind of, you're either on rails and you're part of the superintelligence circuits, or you're not on rails and you're outside of the verifiable domains, and suddenly everything kind of just meanders. Maybe another way to put it is: if you go today to a state-of-the-art model, ChatGPT, and you ask it, tell me a joke. Do you know what
joke you're gonna get? There's the joke. I do feel, I can't tell you the standard form of it, but I do feel like ChatGPT has like three jokes. Yeah. Yeah, so the joke that apparently all the LLMs like the most is: why do scientists not trust atoms? Because they make everything up. Okay, they make everything up. So this is still the joke that emerges. This is the joke you would get three or four years ago, and this is the joke you still get today. Okay, so even though the models have improved tremendously, and if you give them an agentic task they will just go for hours and move mountains for you, then you ask for a joke and it has a stupid joke, a crappy joke from five years ago. And it's because it's outside of the RL, it's outside of the reinforcement learning, it's outside of what's being improved. And it's part of the jaggedness: shouldn't you expect models, as they get better, to also have better jokes, or more diversity of them? It's just not being optimized and it's stuck. Do you, uh, think that that implies that we are not
seeing like generalization in the sense of like broader intelligence of joke smartness being attached to code smartness? Yeah, I think there's some decoupling where some things are verifiable and some things are not and some things are optimized for arbitrarily by the labs depending on like what data went in and some things are not. And, um, and
I mean, there's a premise from some research groups that if you're smarter at code generation or in these verifiable fields, you should be better at everything. And
like the joke situation suggests that that's not happening in all fields. I don't think that's happening. Yeah, I don't think that's happening. I think maybe we're seeing like a
little bit of that, but not like a satisfying amount. Yeah, that exists in humans, too. You can be very, very good at math and still tell a really bad
joke. Yeah, that's true. Yeah. But it still means that we're not getting what the story says, which is that we get a lot of the intelligence and capabilities in all the domains of society for free as we get better and better models.
And that's not exactly what's fundamentally going on. There are some blind spots, and some things are not being optimized for. And this is all clustered up in these opaque neural net models, right? So you're either on rails of what it was trained for, and you're going to get speed of light, or you're not.
And so it's the jaggedness. So that's why I think, even though the progression is obvious, what should happen, you can't let it fully go there yet, because it doesn't fully work, or it's a skill issue and we just haven't figured out how to use it. So, you know, it's hard to tell. Can I ask kind of a blasphemous question, which is, if this jaggedness is persisting and it's
all rolled up in, at least, a monolithic interface, right? But, you know, a single model.
Does that make sense, or should it be unbundled into things that can be optimized and improved against different domains of intelligence? Like unbundling the models into multiple experts in different areas, et cetera. More directly, um, instead of just MoE that we have no exposure to. Because that can be confusing as a user from the outside: why is it so good at this but not at this other thing? Yeah, I think currently my impression is the labs are trying to have a single sort of monoculture of a model that is arbitrarily intelligent in all these different domains, and they just stuff it into the parameters. I do think we should expect more speciation in the intelligences. Like, you know, the animal kingdom is extremely diverse
in the brains that exist. And there's lots of different niches of nature. And some
animals have overdeveloped visual cortex or other kind of parts. And I think we should be able to see more speciation. And you don't need like this oracle that knows everything. You kind of speciate it and then you put it on a specific task.
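A minimal sketch of what "speciate it and then put it on a specific task" might look like at the routing layer; every model name and price below is invented for illustration, not a real offering.

```python
# Hypothetical registry: small speciated models beat one big generalist
# on cost and latency for the narrow tasks they cover.
SPECIALISTS = {
    "lean-proofs": {"model": "math-lean-3b", "cost_per_mtok": 0.10},
    "cuda-kernels": {"model": "kernels-7b", "cost_per_mtok": 0.25},
}
GENERALIST = {"model": "oracle-700b", "cost_per_mtok": 5.00}

def route(task_tag: str) -> dict:
    """Send the task to a speciated model when one exists,
    otherwise fall back to the generalist oracle."""
    return SPECIALISTS.get(task_tag, GENERALIST)

print(route("lean-proofs")["model"])
print(route("tax-advice")["model"])
```

The design point is just that the oracle-that-knows-everything becomes a fallback rather than the default.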
And we should be seeing some of that, because you should be able to have much smaller models that still have the cognitive core, like they're still competent, but then they specialize and then they can become more efficient in terms of latency or throughput on specific tasks that you really care about. Like if you're a mathematician working in Lean, I saw, for example, there's a few releases that really target that as
a domain. So there's probably gonna be a few examples like that where the unbundling kind of makes sense. One question I have is whether or not the capacity constraint on available compute infrastructure drives more of this because efficiency actually matters more. Like if
you, financing aside, all the financing that's involved in all of this. If you have access to full compute for anything you do, maybe you'd leave it as one single model, right? But if you actually feel pressure where you're like, I can't serve a model of massive size for every use case, do you think that leads to any speciation? Does that question make sense to
you? The question makes sense. And I guess like what I'm struggling with is I don't think we've seen too much speciation just yet, right? We're seeing a monoculture of models. And there's clearly pressure for, like, make a good code model, put it back
in the merge again. Yeah. Even though there already is pressure on the models. I guess perhaps I feel like there's a very short-term supply crunch, and maybe that causes more speciation now. Yeah, I think fundamentally, the labs are serving a model, and they don't really know what... the end user
is going to be asking about. So maybe that's like some part of it because they kind of have to multitask over all the possible things that could be asked.
But I think if you're coming to a business and maybe partnering on some specific problems you care about, then maybe you would see that there. Or there will be some very high value applications that are like more niche. But I think right now they're kind of like going after the totality of what's available. I don't think that the science of manipulating the brains is like fully developed yet, partly. What do you
mean manipulating? So like, so fine tuning without losing capabilities as an example. And we
don't have these primitives for actually like working with the intelligences in ways other than just context windows. Like context windows kind of just work and it's very cheap to manipulate, et cetera. And this is how we're getting some of the customization, et cetera.
But I think it's a bit more of a developing science: how you more deeply adjust the models, how you have continual learning maybe, or how you fine tune in a certain area, how you get better in a certain area, or how you actually touch the weights, not just the context windows. And
so it's a lot more tricky, I would say, to touch the weights than just the context windows, because you're actually fundamentally changing the full model and potentially its intelligence.
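One established technique for touching the weights in a certain area without discarding base capabilities is a low-rank adapter in the style of LoRA. The toy numpy sketch below is an illustration, not anything from the conversation; for simplicity it trains only one adapter factor, while real LoRA trains both. The point is that the frozen base matrix `W` is never modified.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2

# Frozen base weights, standing in for a pretrained layer we must not damage.
W = rng.normal(size=(d, d))
W_before = W.copy()

# Low-rank adapter: only the small factors are trained.
A = np.zeros((d, r))                  # zero init: adapter starts as a no-op
B = rng.normal(size=(r, d)) * 0.1     # fixed in this simplified sketch

x = rng.normal(size=(1, d))           # one toy example
target = rng.normal(size=(1, d))

def forward(v):
    return v @ (W + A @ B)            # effective weights = base + low-rank delta

def loss():
    return float(((forward(x) - target) ** 2).sum())

loss_start = loss()
lr = 0.05
for _ in range(200):
    g = 2 * (forward(x) - target)     # d(loss)/d(output) for squared error
    A -= lr * (x.T @ g @ B.T)         # gradient step on A only; W untouched

print(loss_start, "->", loss())
```

The base weights stay byte-for-byte identical, which is the "without losing capabilities" property: deleting the adapter recovers the original model exactly.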
And so maybe it's just not a fully developed science, if that makes sense, of speciation. And it also has to be cheap enough for that speciation to be worthwhile
in these given contexts. Can I ask a question about an extension to auto research that you described in terms of open ground? You said, okay, well, we have this thing. We need more collaboration surface around it, essentially, for people to
contribute to research overall. Can you talk about that? Yeah, so we talked about auto research as a single thread of, like, I'm going to try stuff in a loop.
But fundamentally, the parallelization of this is the interesting component. And I guess I was trying to play around with a few ideas, but I don't have anything that clicks, I don't have something that I'm super happy with just yet, but it's something I'm working on on the side when I'm not working on my claw. So I think one issue is, if you have a
bunch of nodes of parallelization available to you, then it's very easy to just have multiple auto researchers talking through a common system or something like that. What I was more interested in is how you can have an untrusted pool of workers out there on the internet. So for example, in auto research, you're just trying to find the piece of code that trains a model to a very low validation loss. If
anyone gives you a candidate commit, it's very easy to verify that that commit is correct, is good. Like, someone from the internet could claim that this piece of code will optimize much better and give you much better performance. You could just check, very easily. Probably a lot of work goes into that checking, but fundamentally they could lie, et cetera. So you're basically dealing with a similar kind of thing. It's almost, actually, my designs that incorporate an untrusted pool of workers look a little bit like a blockchain, because instead of blocks, you have commits. And these commits can build on each other, and they contain changes to the code as you're improving it. And the proof of work is basically doing tons of experimentation to find the commits that work. And
that's hard. And then the reward is just being on the leaderboard right now. There's
no monetary reward whatsoever. But I don't want to push the analogy too far. It fundamentally has this issue where a huge amount of search goes into it, but it's very cheap to verify that a candidate solution is indeed good, because you can just train a single model. You know, someone had to try 10,000 ideas; you just have to check that the thing they produced actually works, because the other 9,999 of them
didn't work, you know? And so basically, long story short, it's like you have to come up with a system where an untrusted pool of workers can collaborate with a trusted pool of workers that do the verification. And the whole thing is kind of like asynchronous and works and so on. And it's like safe from a security perspective because if anyone sends you arbitrary code and you're gonna run it, that's very
sketchy and dodgy. But fundamentally it should be totally possible. So, you're familiar with projects like SETI@home and Folding@home? All of these problems have a similar kind of setup. In Folding@home, you're folding a protein, and it's very hard to find a configuration that is low energy, but if someone finds a configuration that they evaluate to be low energy, that's perfect: you can just use it, you can easily verify it. So a lot of things have this property that they're very expensive to come up with but very cheap to verify. And so in all those cases, things like Folding@home or SETI@home or auto research at home will be good fits. And so, um, long story short, a swarm of agents on the internet could collaborate to improve LLMs and could potentially even run circles around Frontier
Labs. Who knows? Maybe that's even possible. Frontier Labs have a huge amount of trusted compute, but the Earth is much bigger and has a huge amount of untrusted compute. But if you put systems in place that deal with this, then
maybe it is possible that the swarm out there could come up with better solutions. And people kind of contribute cycles to a thing that they care
about. And so, sorry, so the last thought is lots of companies or whatnot, they
could maybe have their own things that they care about. And if you have compute capacity, you could contribute to different kinds of auto research tracks. Like maybe
you care about certain, you know, like you care about like cancer or something like that of certain type. You don't have to just donate money to an institution. You
actually could purchase compute and then you could join the auto research forum for that project, you know? So if everything is rebundled into auto-researchers, then compute becomes the thing that you're contributing to the pool. Yeah, that's very inspiring. And it's also interesting.
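The search-versus-verify asymmetry described here can be made concrete with a toy sketch. In this Python version, scoring a "commit" is a stand-in for training and measuring validation loss, and all names are illustrative. The trusted verifier never believes a claimed score: it always re-evaluates, which is cheap, while honest workers burn compute on the search.

```python
import hashlib

# Toy stand-in for "train to a low validation loss": the expensive part
# is searching for a commit whose score is low; the cheap part is
# re-scoring one candidate to check a claim.
def evaluate(candidate: str) -> int:
    return int(hashlib.sha256(candidate.encode()).hexdigest(), 16) % 10_000

leaderboard = []  # (score, worker) pairs, best (lowest score) first

def submit(worker: str, candidate: str, claimed_score: int) -> bool:
    """Trusted verifier: never trust the claim, always re-evaluate."""
    actual = evaluate(candidate)
    if actual != claimed_score:
        return False              # claim doesn't check out; reject
    leaderboard.append((actual, worker))
    leaderboard.sort()
    return True

# An honest untrusted worker burns compute on the expensive search...
best = min((f"commit-{i}" for i in range(5_000)), key=evaluate)
print(submit("honest-worker", best, evaluate(best)))

# ...while a dishonest worker claims an impossible score with no work.
print(submit("liar", "junk", -1))
print(leaderboard)
```

Verification here costs one hash; the search cost thousands. Only the reward, being on the leaderboard, is shared.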
Like, I don't know how far this goes, but it is interesting that at least some audience of people, you know, here in Silicon Valley or lining up at, you know, retail stores in China have discovered that, like, having access to personal compute is interesting again. Yeah. Right? So maybe they're really motivated to do that for their claws,
and then they can contribute to auto-research. It's almost like, dollars are the thing everyone cares about, but is the FLOP the thing that actually everyone cares about in the future? Like,
is there going to be a flippening, almost, of what's the thing that you care about? Like right now, for example, it's really hard to get compute even if you have money. Yeah. So actually, it almost seems like the FLOP is dominant in a certain sense. Yeah. So maybe that's kind of like that: how many FLOPs do you control, instead of
like what wealth do you control? I don't actually think that's true, but it's kind of interesting to think about. The last thing you released was a little bit of jobs data analysis. Is that right? And it might touch a nerve, even though you're just visualizing some public data. What were you curious about? Yeah, I guess I was curious too. I mean, everyone is really thinking about the impacts of AI on the job market and what it's going to look like. So I was just interested to take a look: what does the job market look like, where are the different roles, and how many people are in different professions? And I was really just interested to look through the individual cases and try to think myself about, you know, with these AIs and how they're likely to evolve, are these going to be tools that people are using, or are these going to be displacing tools for these professions? And what are the current professions, and how are they going to change? Are they going to grow or adjust to a large extent, or what could be new professions? So it's really just a way to fuel my own chain of thought about the industry, I suppose. And so, yeah, the jobs data basically is just from the Bureau of Labor Statistics. They actually have a percent outlook for each profession about how much it's expected to grow over the next, I think, almost decade. Yeah,
I think it's a decade, but it was made in 2024. We need a lot of healthcare workers. So they've already made those projections, and I'm not sure actually 100% what the methodology was that they put into the projections. I guess I was interested to color things by, if what's primarily being developed now is this kind of more digital AI, it's almost like
these ghosts or spirit entities that can like interact in the digital world and manipulate a lot of like digital information. And they currently don't really have a physical embodiment or presence. And the physical stuff is probably gonna go slightly slower because you're manipulating
atoms. So flipping bits, and the ability to copy-paste digital information, makes everything a million times faster than accelerating matter, you know? So energetically, I just think we're gonna see a huge amount of activity in digital space, a huge amount of rewriting, a boiling soup of activity. And I think we're going to see something
that in the digital space goes at the speed of light compared to what's going to happen in the physical world. To some extent that would be the extrapolation. And so I think there's currently kind of an overhang, where there can be a lot of unhobbling, almost, of a lot of digital information processing that used to be done by computers and people. And now, with AIs as a third kind of manipulator of digital information, there's going to be a lot of refactoring in those disciplines. But the physical world is actually going to be, I think, behind that by some amount of time. And so what's really fascinating to me, that's why I was highlighting the professions that fundamentally manipulate digital information. This is work you could do from your home, etc. Because I feel like those will be, like, things will change. And it doesn't mean that there's going to be less of those jobs or
more of those jobs, because that has to do with demand elasticity and many other factors. But things will change in these professions because of these new tools and,
um, because of this upgrade to the nervous system of the human superorganism, if you want to think about it that way. Given the look you had at the data, do you have either any observations or guidance for people facing the job market or thinking about what to study now or what skills to
develop? I mean, we can all go get, like, I'm very thankful that I have to meet people for my job right now. Yeah. More physical. Could you do your work from home though? I could. I think there are relationship parts of it that are hard, but most of it I could. Yeah. I think it's really hard to tell, because again, the job market is extremely diverse and I think the answers
will probably vary. But to a large extent, these tools are extremely new and extremely powerful, and so just trying to keep up with it is the first thing. Because I think a lot of people kind of dismiss it, or they're afraid of it, et cetera, which is totally understandable, of course. Yeah, I think it's
fundamentally an empowering tool at the moment And these jobs are bundles of tasks and some of these tasks can go a lot faster. And so people should think of it as primarily a tool that it is right now. And I think the long-term future of that is uncertain. Yeah, it's kind of really hard to forecast, to be honest. And like, I'm not professionally like doing that really. And I think there's a
job for economists to do properly. You are an engineer, though. And one thing I thought was interesting is that the demand for engineering jobs is continuing to increase. Yeah. I can't tell if that's a good temporary phenomenon. I'm not sure
how I feel about it yet. Do you know? Yeah, that's the demand elasticity almost. Like, software was scarce, right? And so the reason we don't have more demand
for software is just that there's scarcity and it's too expensive. It's too expensive, yeah.
So if the barrier comes down, then actually you have the Jevons paradox, which is that the demand for software actually goes up: it's cheaper and more powerful. The classical example of this is always the ATMs and the bank tellers, because there was a lot of fear that ATMs, and computers basically, would displace tellers. But what happened is they made the cost of operation of a bank branch much cheaper, so there were more bank branches, and so there were more tellers. That's the canonical example people cite. But basically it's just Jevons paradox: something becomes cheaper, so there's a lot of unlocked demand for it. So I do have a cautiously optimistic view of this in software engineering, where it does seem to me like the demand for software will be extremely large, and it's just become a lot cheaper. It's very hard to forecast, but it does seem to me like right now, at least locally, there's going to be more demand for software, because software is amazing. It's digital information processing. You're not forced to use arbitrary tools that were given to you, which are imperfect in various ways.
You're not forced to subscribe to what exists. Code is now ephemeral and it can change and it can be modified. And so I think there's going to be a lot of activity in the digital space to like rewire everything in a certain sense.
And I think it's going to create a lot of demand for this kind of stuff. I think long-term, um, yeah, obviously even with auto research, like OpenAI
or Anthropic or these other labs, they're employing, what, a thousand-something researchers, right? These researchers are basically, you know, automating themselves away, actively. And this is the thing they're all trying to do. Yeah, and I think when I went around, some of those researchers also feel the psychosis, right? Because it's working. Yeah. Right. And so they're like, oh, it's over for me too. I did spend a bunch of time going around OpenAI, and I was like, you guys realize if we're successful, we're all out of jobs? Like, we're just building automation for Sam or something like that, or the board, I'm not sure, but there's just this feeling like it's automation for the board or the CEO or something like that, and we're all out of our jobs, and maybe contributing on the sides. And so, yeah, it's kind of unnerving from that perspective. Is it okay if I ask you Noam's question? You know, you could be doing that, right? Auto researching with a lot of compute scale and a bunch of colleagues at one of the frontier labs. So why not? Well, I was there for a while, right? And I did re-enter. So to some extent I agree, and I think there are many ways to slice this question. It's a very loaded question, a little bit. I will say that I feel very good about what people can contribute and their impact outside of the frontier labs, obviously. Not just in the industry, but
also in more ecosystem level roles. So your role, for example, is more like ecosystem level. My role currently is also kind of more on ecosystem level. And I feel
very good about impact that people can have in those kinds of roles. I think
conversely, there are definite problems in my mind with basically aligning yourself way too much with the frontier labs, too. So fundamentally, I mean, you have a huge financial incentive to align with these frontier labs, and by your own admission the AIs are going to really change humanity and society in very dramatic ways. And here you are, basically building the technology and benefiting from it, and being very allied to it through financial means. This was a conundrum that was at the heart of how OpenAI was started in the beginning; this was the conundrum that we were trying to solve. And it's still not resolved; the conundrum is still not fully resolved. So that's number one. You're not a completely free agent
and you can't actually be part of that conversation in a fully autonomous, free way.
Like if you're inside one of the frontier labs, there are some things that you can't say. And conversely, there are certain things that the organization wants you to say.
And they're not going to twist your arm, but you feel the pressure of what you should be saying. Right. Obviously, otherwise it's like really awkward conversations, strange side eyes, like, what are you doing? You know, so you can't like really be an independent agent. And I feel like a bit more aligned with humanity in a certain sense outside of the frontier lab, because I'm not subject to those
pressures almost, right? And I can say whatever I want. Yeah, I would say in the frontier labs, you can have impact there, of course, as well. But
there's many researchers and maybe you're one of them. Maybe your ideas are really good, et cetera. And maybe there's a lot of decision making to do and you want
to be in a position where you are in the room with those conversations when they come up. I do think that currently the stakes are overall fairly low. And
so everything is kind of nice. But ultimately, at the end of the day, when the stakes are really high, et cetera, if you're an employee at an organization, I don't actually know how much sway you're going to have on your organization, what it's going to do. Fundamentally, at the end of the day, you're not really in charge.
You're in a room and you're contributing ideas, but you're not really in charge of that entity that you're a part of. So those are some sources of misalignment, I think, to some extent. I will say that in one way, I do agree a lot with that sentiment that I do feel like the labs, for better or worse, they're opaque and a lot of work is there. And they're kind of at the
edge of capability and what's possible. And they're working on what's coming down the line.
And I think if you're outside of that frontier lab, your judgment fundamentally will start to drift, because you're not part of what's coming down the line. And so
I feel like my judgment will inevitably start to drift as well. And I won't actually have an understanding of how these systems actually work under the hood. That's an
opaque system. I won't have a good understanding of how it's going to develop and et cetera. And so I do think that in that sense, I agree and something
I'm nervous about. I think it's worth basically being in touch with what's actually happening and actually being in the frontier lab. And if some of the frontier labs would have me come for some amount of time and do really good work for them, and then maybe come in and out. Guys, he's looking for a job. This is
super exciting. Then I think that's maybe a good setup, because I kind of feel like maybe that's one way to actually be connected to what's actually happening, but also not feel like you're necessarily fully controlled by those entities. So I think, honestly, in my mind, like, Noam can probably do extremely
good work at OAI, but also I think his most impactful work could very well be outside of OpenAI. Noam, that's a call to be an independent researcher with auto research. Yeah, there's many things to do on the outside. And I think ultimately, I
think the ideal solution maybe is, yeah, going back and forth. And I think fundamentally you can have really amazing impact in both places. So, very complicated. I don't know, it's a very loaded question a little bit, but I mean, I joined the frontier lab and now I'm outside, and then maybe in the future
I'll want to join again. And I think that's kind of like how I look at it. One question related to what visibility does the world or the AI
ecosystem have into the frontier is, like, how close open source is
to the frontier, and how sustainable that is. I think the entire sequence of events is actually quite surprising, from having a handful of Chinese models and global models, and I think people are going to continue releasing models in the near term that are closer to the frontier than much of the industry anticipated from a capability perspective. I don't know if you're surprised by that, but you're a long-term contributor to open source. What's your prediction here? Yeah, so roughly speaking, the closed models are ahead, but people are monitoring the number of months that open-source models are behind. And it started with there being nothing, and then it went to 18 months. Yeah, and there's been a convergence, right? So maybe they're behind by, what is the latest, maybe six months, eight months kind of thing right now? Yeah, I'm a huge fan of open source, obviously. So for example, in operating systems, you have closed ones, like Windows and macOS. These are large software projects, kind of like what LLMs are going to become. And there's Linux. And Linux is actually an extremely successful project. It runs on the vast majority of computers; last time I checked, was it like 60 percent or something run Linux? And that's because there is a need in the industry to have a common open platform that everyone feels sort of safe using. I would say the industry has always felt a demand for that kind of project to exist, and I think the same is true now; businesses actually want, there's demand for, this kind of thing to exist. The big difference is that everything is capital; there's a lot of capital that goes into this. So I think that's where things fall apart a little bit; it makes it a bit harder to compete in a certain sense.
I do think that the current models are very good. The other thing that I think is really interesting is that for the vast majority of consumer use cases and things like that, even the current open-source models are actually quite good, I would say. And if you go forward more years, it does seem to me like a huge number of simple use cases are going to be well covered and actually even run locally. But there's always going to be some demand for frontier intelligence, and that can actually be an extremely large piece of the pie. But it could be that the need for frontier intelligence is going to be, you know, Nobel Prize kind of work, or like, let's move Linux from C to Rust. There are going to be bigger projects, scoped in that kind of way, and there are going to be maybe more, and maybe that's where a lot of the frontier closed intelligences are going to be interacting. And open source is kind of going to eat through a lot of the more basic use cases or something like that. You know, at some point, what is frontier today, probably later this year, what's frontier today in terms of what I'm using right now from the closed labs, might be open source. And that's going to be doing a lot of work. So I kind of expect that this dynamic will basically continue: we'll have frontier labs that have closed AIs that are kind of like these oracles, and then we'll have open source behind by some number of months. And I kind of expect that to continue.
And I actually think that's a pretty, pretty good setup overall. Because I'm a little bit hesitant about, I don't actually think it's structural, but I think there's some systemic risk attached to just having intelligences that are closed, and that's it. And I think that, you know, centralization has a very poor track record in my view, in the past. You mean in political or economic systems in general? Yes, exactly. I think there's a lot of pretty bad precedent. So I want there to be a thing that is maybe not at the edge of capability, because that's new and unexplored, et cetera, but I want there to be a thing that's behind, and that is kind of like a common working space for intelligences that the entire industry has access to. Yeah, that seems to me like a pretty decent power balance for the industry. Yeah. I was thinking there are just many problems to solve, right? If you keep advancing intelligence at the frontier, we can do new things, and there are a lot of very big problems for humanity, right? Yeah. And so it seems that that will continue to be a very expensive game. And so I want to root for labs that are doing that, because there are problems we cannot solve without continuing to advance the models in a very expensive way. And yet, as you point out, if what we have today as frontier is open, that's a lot of capability. Right. And so I think, you know, the power of that, or the democratization of that, seems very useful and also healthy. Yeah. I think basically by accident, we're actually in an okay spot. An optimal spot. Yeah. By accident, we happen to be in a good spot in a certain sense. Well, and to some degree, the longer this dynamic endures, the healthier a spot the ecosystem might be in. Yeah. Right. Because you have more and more area under the curve. And I will say that even on the closed side, I almost feel like it's been further centralizing recently, because I think a lot of the front runners are not necessarily the top tier. And so in that sense, I think it's not super ideal. I would love there to be more frontier labs, because I'm by default very suspicious of, I want there to be more people in the room. I think in machine learning, ensembles always outperform any individual model. And so I want there to be ensembles of people thinking about all the hardest problems, and I want the ensembles of people in the room to be all well informed and to make all those decisions, you know? So I don't want it to be behind closed doors with two people or three people. I feel like that's not a good feature. I almost wish there were more labs, long story short. And I do think that open source has a place to play. I hope it sticks around. It's currently slightly behind, and that's actually kind of a good thing. Okay. You worked on the precursor to generalized robotics, autonomy in cars, right? A lot has
happened in the last couple of months with robotics companies as well: acceleration, really impressive generalization across environments and tasks, increasingly long-horizon tasks, lots of money going into the space. Is it going to happen? Has anything in your view changed recently? So my view is kind of informed by what I saw in self-driving. And I do feel like self-driving is the first robotics application. What I saw at the time, like 10 years ago, is that there were a large number of startups, and most of them basically didn't make it long term. And what I saw is that a lot of capital expenditure had to go in, and a lot of time. And so I think robotics, because it's so difficult and so messy and requires a huge amount of capital investment and a lot of conviction, it's a big problem, and I think atoms are really hard. So I feel like it will lag behind what's going to happen in digital space. And in digital space, there's going to be a huge amount of unhobbling. Basically, things that weren't super efficient becoming a lot more efficient by a factor of 100, because bits are so much easier. And so currently, in terms of what's going to change and where the activity is, I feel like digital space is going to change a huge amount, and then the physical space will lag behind. And what I find very interesting is the interface in between them as well. Because if we do have more agents acting on behalf of humans, and more agents talking to each other and doing tasks and participating in the economy of agents, et cetera, you're going to run out of things that you can do purely in digital space. At some point, you have to go to the universe and you have to ask it questions. You have to run an experiment and see what the universe tells you back, to learn something. And so we currently have a huge amount of digital work, because there's an overhang in how much we've collectively thought about what is already digital. We just didn't have enough thinking cycles among the humans to think about all the information that is already digital and already uploaded. And so we're going to start running out of stuff that is already uploaded. At some point you're going to have read all the papers and processed them and had some ideas about what to try. But I don't actually know how far you can get with intelligence that's fully closed off with just the information that's available to it. And so I think what's going to happen is, first there's going to be a huge amount of unhobbling, and I think there's a huge amount of work there. Then it's actually going to move to the interfaces between physical and digital. And that's sensors for seeing the world, and actuators for doing something to the world. So I think a lot of interesting companies will actually come from that interface: can we feed the superintelligence data, in a certain sense, and can we actually take data out and manipulate the physical world per its bidding, if you want to anthropomorphize the whole thing. Right. And then the physical world, actually, I almost feel like the total addressable market, in terms of the amount of work and so on, is massive, possibly even much larger than what can happen in digital space. So I actually think it's a much bigger opportunity as well. But I do feel like it's a huge amount of work, and in my mind, the atoms are just a million times harder. So it will lag behind, but it's also, I think, a bit of a bigger market. I think the opportunities follow that kind of trajectory. So right now, digital is my main interest. Then interfaces would be after that. And then maybe some of the physical things; their time will come, and they'll be huge when they do come. Well, it's an interesting framework for it, too, because certain things, not the things I'm working on right now, but certain things are much easier even in the world of atoms, right? If you just think about read and write to the physical world: read, sensors, cameras, there's a
lot of existing hardware. And you can imagine enriching agent capabilities or capturing a lot of new data if you're just clever about it. And you don't necessarily have to invest a lot to get something valuable. Yeah. So examples of this that I saw: a friend of mine, Liam, is the CEO of Periodic. I visited them last week, so it's top of mind. They're trying to do auto research for materials science. And in that case, the sensors for the intelligence are actually pretty expensive lab equipment. And the same is true in biology; I think a lot of people are very interested in engineering biology, and, you know, the sensors will be more than just video cameras, if that makes sense. And then the other thing I saw, for example, is companies where you basically pay people for training data. Yeah, programmatically. Yeah, to feed the Borg.
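The "pay people programmatically for training data" pattern could be sketched as a toy bounty board, where an agent posts a priced request and a human acting as a "sensor" fulfills it. Everything here, the class names, the task string, the price, is hypothetical, invented purely for illustration; no real marketplace or API is implied:

```python
from dataclasses import dataclass

@dataclass
class Bounty:
    """A priced request for a piece of real-world data (hypothetical)."""
    task: str
    price_usd: float
    fulfilled: bool = False

class BountyBoard:
    """Toy marketplace: agents post priced data requests; humans supply the data."""
    def __init__(self):
        self.bounties = []

    def post(self, task, price_usd):
        bounty = Bounty(task, price_usd)
        self.bounties.append(bounty)
        return bounty

    def fulfill(self, task):
        # First matching open bounty pays out to whoever supplied the data.
        for bounty in self.bounties:
            if bounty.task == task and not bounty.fulfilled:
                bounty.fulfilled = True
                return bounty.price_usd
        return None  # no open bounty for this task

board = BountyBoard()
board.post("photo of a specific street corner at noon", 10.0)
payout = board.fulfill("photo of a specific street corner at noon")  # 10.0
second = board.fulfill("photo of a specific street corner at noon")  # None: already fulfilled
```

The point is only the shape of the loop: an autonomous buyer sets a price, a human sensor delivers, and no human needs to look at the result.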
And so these are all examples of sensors in a certain sense. They take many diverse shapes and forms, if that makes sense. Yeah, so I'm looking forward to the point where I can ask for a task in the physical world, put a price on it, and just tell the agent: you figure out how to do it, go get the data. I'm actually kind of surprised we don't have more information markets. For example, Polymarket or other betting markets, or even stocks, et cetera, have so much autonomous activity, and a rising amount of it. So, for example, if Iran was just happening now, how come there isn't a process where taking a photo or a video from somewhere in Tehran costs like 10 bucks? Someone should be able to pay for that, you know, and that's an example of feeding the intelligence. There's not going to be a human looking at it; it's going to be agents who are trying to guess the betting games and stock markets and so on. So I kind of feel like the agentic web is still fairly new, and there are no mechanisms for this yet, but this is an example of what I think might happen. There's a good book that maybe is inspiring, called Daemon. You've potentially read it. In Daemon, the intelligence ends up puppeteering humanity a little bit, in a certain sense. And so humans are kind of like its actuators, but humans are also its sensors. And so I think collectively, society will reshape in a certain way to serve that; it will kind of end up happening collectively across the industry, where there's just a lot more automation that has certain needs, and humans will be serving those needs of that machine, not necessarily each other's. But we were on this very specific point of missing pieces of training data. We needed something like auto research, right?
Like we need the training cycle or the SFT piece to be far more mechanized. For which part? In order to take the human out of the loop, to ask for a task that is just, improve my model quality with new data, right? Yes. Does that make sense to you? If you can't have the model do the training runs by itself, then your ability to do this as a closed-loop task by pricing data is more challenged. Yes, yes, 100%. Yeah. But now we do. The thing is, for LLM training, it really fits the paradigm. So you'd actually expect... Yeah, a clean metric. Yeah, LLM training actually fits the paradigm really well, really easily: all the optimization of all the code so it runs faster, and then you also have metrics that you can optimize against. I do think that if you had an autonomous loop over those metrics, there's going to be a lot of Goodharting going on, where the system will overfit to those metrics. But then you can use the system to devise more metrics, and then you just have really good coverage. So it's kind of hard to tell, but... in a certain sense it's a pretty, pretty good fit. I want
to talk about a tiny side project you have before we end. Tell me about microGPT. Oh yeah, okay, so microGPT. I have this running obsession, of maybe a decade or two, of simplifying and boiling down LLMs to their bare essence, and I've had a number of projects along these lines, like nanoGPT and makemore and micrograd, et cetera. So I feel like microGPT is now the state of the art of me trying to boil it down to just the essence. Because the thing is, training neural nets, and LLMs specifically, is a huge amount of code, but all of that code is actually complexity from efficiency. It's just because you need it to go fast. If you don't need it to go fast and you just care about the algorithm, then that algorithm is actually 200 lines of Python, very simple to read, and this includes comments and everything. Because you just have your dataset, which is text. And you need your neural network architecture, which is like 50 lines. You need to do your forward pass, and then you have to do your backward pass to calculate the gradients, and an autograd engine to calculate the gradients is like 100 lines. And then you need an optimizer, Adam for example, which is a very state-of-the-art optimizer, and it's again like 10 lines, really. So putting everything together in the training loop is, yeah, 200 lines. And it was interesting to me: normally, before, maybe a year ago or more, if I had come up with microGPT, I would be tempted to explain it to people, like make a video stepping through it or something like that, and I actually tried to make that video a little bit, and tried to make a little guide to it and so on. I kind of realized that this is not really adding too much, because it's already so simple, it's 200 lines, that anyone could ask their agent to explain it in various ways.
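To give a flavor of what "the algorithm without the efficiency" looks like, here is an even smaller sketch in the spirit of micrograd: a scalar autograd engine (the forward and backward passes), an Adam step in roughly the ten lines mentioned above, and a training loop fitting a toy curve. This is an illustrative reconstruction, not Karpathy's actual microGPT code; the class, the toy data, and the hyperparameters are all invented here:

```python
import math
import random

class Value:
    """Scalar autograd node, micrograd-style: tracks data, grad, and a backward rule."""
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():  # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():  # d(tanh x)/dx = 1 - tanh(x)^2
            self.grad += (1.0 - t * t) * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological sort, then apply the chain rule from the output backwards.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._prev:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

def adam_step(params, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Adam in about ten lines: bias-corrected first/second moment estimates."""
    for i, p in enumerate(params):
        m[i] = b1 * m[i] + (1 - b1) * p.grad
        v[i] = b2 * v[i] + (1 - b2) * p.grad ** 2
        m_hat = m[i] / (1 - b1 ** t)
        v_hat = v[i] / (1 - b2 ** t)
        p.data -= lr * m_hat / (math.sqrt(v_hat) + eps)
        p.grad = 0.0

# Training loop: fit y = tanh(w*x + b) to four toy points.
random.seed(0)
w, b = Value(random.uniform(-1, 1)), Value(random.uniform(-1, 1))
m, v = [0.0, 0.0], [0.0, 0.0]
data = [(0.5, 0.8), (-0.5, -0.8), (1.0, 0.9), (-1.0, -0.9)]
for step in range(1, 201):
    loss = Value(0.0)
    for x, y in data:
        err = (w * x + b).tanh() + (-y)  # prediction minus target
        loss = loss + err * err          # squared-error loss
    loss.backward()
    adam_step([w, b], m, v, step)
final_loss = loss.data
```

Everything beyond a skeleton like this in a real training codebase, batching, GPU kernels, distributed communication, mixed precision, is the efficiency complexity being described: it makes the same algorithm fast, not different.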
I'm not explaining things to people anymore; I'm explaining them to agents. If you can explain it to agents, then agents can be the router, and they can actually target it to the human in their language, with infinite patience, at their capability, and so on. Right. If I don't understand this particular function, I can ask the agent to explain it to me three different ways. Yeah. And I'm not going to get that from you. Exactly. And so I kind of feel like, you know, what is education? It used to be guides. It used to be lectures. But now I'm more and more explaining things to agents, and maybe I'm coming up with skills, where a skill is basically just a way to instruct the agent how to teach the thing. So maybe I could have a skill for microGPT describing the progression I imagine the agent should take you through if you're interested in understanding the codebase. And it's just hints to the model: oh, first start off with this, and then with that. And so I could just script the curriculum a little bit as a skill. So I feel like there's going to be less explaining things directly to people, and it's going to be more just: does the agent get it? And if the agent gets it, it'll do the explanation. And we're not fully there yet, because I still think I can probably explain things a little bit better than the agents, but the models are improving so rapidly that I feel like it's a losing battle to some extent. And so I think education is going to be reshuffled by this quite substantially, where it's almost the end of teaching each other things a little bit. Like if I have a library of code, for example. It used to be that you have documentation for other people who are going to use your library, but you shouldn't do that anymore. Instead of HTML documents for humans, you should have Markdown documents for agents, because if agents get it, then they can just explain all the different parts of it. So it's this redirection through agents, you know? So I think
we're going to see a lot more of that playing out. We'll see if the great teachers develop intuition for how to explain things to agents differently, ultimately. So for example, microGPT: I tried to get an agent to write microGPT. I told it, try to boil down neural network training to the simplest thing, and it can't do it. MicroGPT is like the end of my obsession; it's the 200 lines. I've obsessed about this for a long time. This is the solution. Trust me, it can't get simpler. And this is my value add. Everything else, the agent gets it. It just can't come up with it, but it totally gets it and understands why it's done a certain way, et cetera. So my contribution is kind of these few bits, but everything else, in terms of the education that goes on after that, is not my domain anymore. So yeah, education kind of changes in those ways, where you have to infuse the few bits that you feel strongly about: the curriculum, or the better way of explaining it, or something like that. The things that agents can't do are your job now. The things that agents can do, they can probably do better than you, or will very soon. And so you should be strategic about what you're actually spending time on. Well, we appreciate the few bits. Thank you, Andrej. Okay. Find us
on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen; that way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.