
From Data Centers to Dyson Spheres: P-1 AI's Path to Hardware Engineering AGI

By Sequoia Capital

Summary

## Key takeaways

- **Training Data Bottleneck in Physical Engineering**: There haven't been millions of airplanes designed since the Wright brothers, only about a thousand, nowhere near enough to train a large model, even if all were accessible and semantically integrated. [00:27], [04:45]
- **Archie: Cognitive Automation Agent**: Archie automates what human engineers do: distilling requirements into key design drivers, postulating solutions, first-order sizing with the relevant physics phenomenology, and knowing how to use existing detailed design tools without replacing them. [07:30], [08:32]
- **Federated Models for Engineering Primitives**: Engineering tasks reduce to primitive operations such as design evaluation, synthesis, and error infilling, executed by a federated assembly of models including graph neural networks, geometric reasoners, and a lobotomized LLM for multiphysics reasoning, all orchestrated by a reasoner LLM. [09:20], [11:42]
- **First Market: Data Center Cooling Customization**: Starting with data center cooling systems, an order of magnitude more complex than residential units at about a thousand unique parts, to deliver semi-custom solutions currently limited by engineering bandwidth amid surging AI demand. [14:53], [15:28]
- **Engineering AGI as Reflective Pinnacle**: Engineering AGI is reflection: self-awareness of the process used for recall, understanding, evaluation, error correction, and synthesis, plus recognition of its limitations and alternatives, a capability reserved for senior experts. [19:37], [20:12]
- **Roadmap: Order of Magnitude Yearly**: Progress by scaling synthetic training data complexity an order of magnitude yearly: data center cooling (~1K parts) to industrial systems, mobility, then aerospace (~1M parts). [16:02], [16:46]

Topics Covered

  • Physical AI Lags Software Due to Data Scarcity
  • Generate Physics-Based Synthetic Data
  • Archie Automates Human Engineer Cognition
  • Roadmap Scales Complexity Yearly
  • Engineering AGI Requires Self-Reflection

Full Transcript

Again, when I was asking the question over the last couple of years of why isn't anybody working on AI for building the physical world, the answer was training data. Fundamentally, if you want an AI engineer that can help you design an airplane or modify an airplane, and you say, "Hey, what happens if I change the wing on an A320 by 10%, increase the wing area by 10%?" In order to answer that, your model has to be trained on millions of airplane designs, ideally. And there just haven't been millions of airplanes designed since the Wright brothers, even if you did magically have access to all of them, which you don't, and even if they were all modeled in a coherent, semantically integrated way, which they aren't. Even hypothetically, you would have maybe a thousand designs since the birth of aviation. Nowhere near enough to train a large model.

Today we're excited to welcome Paul Eremenko, CEO of P-1 AI. Paul was a director at DARPA and the youngest CTO of Airbus, at age 35, and now he's getting to turn his science fiction dreams into reality at P-1 AI, which is attempting to build engineering AGI for the physical world.

We already have fantastic companies like Anthropic, Cursor, and Devin that are transforming software engineering, but hardware engineering in the physical world, whether it's data center coolers or airplanes, has yet to be transformed radically by AI. We talked to Paul about the opportunity, the key bottlenecks in gathering data, and how he envisions their agent Archie evolving to help build the physical world around us, from fighter jets to starships. Paul, thank you so much for joining us today, and we're delighted to have both you and your Jack Russell Terrier-Beagle mix, Lee, on the show.

Welcome. Let's start off with this: we just had our AI conference, AI Ascent, and at the conference Jeff Dean was talking about the potential of vibe coding and how a 24/7 junior software engineer is going to be possible through AI within the next year or so. It seems like software engineering is really going through a vertical takeoff moment right now. What do you think is happening in the physical world as it pertains to physical engineering?

So, not a lot is the short answer. One of the reasons we founded P-1 AI is because I grew up on hard sci-fi, and I was promised AI that would help us build the physical world, the world around us, and eventually starships and Dyson spheres. When the deep learning revolution really started to take off, I asked the question: well, who's building this stuff? Who is doing the AI that's going to help us build the physical world? And the answer was nobody was working on it. It really wasn't even on the agenda of the foundation labs. And some years later, today, in 2025, it still isn't. So we asked the question of why that is, and we can talk about that maybe later in the podcast. We think we have a solution to remedying some of the reasons, some of the challenges, and to actually bringing it to market.

And Jeff, by the way, we're very grateful to have him as an angel investor in the company. I think coding AI has been a long time coming. One of my co-founders, Susmit Jha, did his PhD in 2011 on program synthesis, so this is not a new technology; it's just now finding product market fit, the right packaging, the right business model, the right pricing models. I think physical AI has the benefit of standing on the shoulders of a lot of the coding AI work: if you can have a programmatic representation of your physical system, you can use some of the program synthesis type techniques to create physical designs. So it's not going to take a decade or 15 years. We think that we can put the technology bricks together this year and hopefully start finding product market fit as early as next year.
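To make the idea of a programmatic representation concrete, here is a minimal sketch assuming a simple component-graph encoding; the classes, field names, and the toy refrigeration loop are illustrative assumptions, not P-1 AI's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical schema: a physical system encoded as a typed component graph.
@dataclass
class Component:
    name: str         # e.g. "comp1"
    kind: str         # catalog type, e.g. "compressor"
    attributes: dict  # sizing parameters, e.g. {"max_flow_kg_s": 0.4}

@dataclass
class Connection:
    source: str       # component name
    target: str       # component name
    modality: str     # "fluid", "electrical", "thermal", ...

@dataclass
class SystemDesign:
    components: list = field(default_factory=list)
    connections: list = field(default_factory=list)

# A toy refrigeration loop expressed programmatically; a program-synthesis-style
# routine could add, remove, or re-parameterize nodes in this graph.
design = SystemDesign(
    components=[
        Component("comp1", "compressor", {"max_flow_kg_s": 0.4}),
        Component("cond1", "condenser", {"area_m2": 1.2}),
        Component("evap1", "evaporator", {"area_m2": 0.9}),
    ],
    connections=[
        Connection("comp1", "cond1", "fluid"),
        Connection("cond1", "evap1", "fluid"),
        Connection("evap1", "comp1", "fluid"),
    ],
)
```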

Can we double click on that a little bit? What are those technology bricks? What pieces need to be in place for this to become a reality?

Yeah. So the biggest one: again, when I was asking the question over the last couple of years of why isn't anybody working on AI for building the physical world, the answer was training data. Fundamentally, if you want an AI engineer that can help you design an airplane or modify an airplane, and you say, "Hey, what happens if I change the wing on an A320 by 10%, increase the wing area by 10%?" In order to answer that, your model has to be trained on millions of airplane designs, ideally. And there just haven't been millions of airplanes designed since the Wright brothers, even if you did magically have access to all of them, which you don't, and even if they were all modeled in a coherent, semantically integrated way, which they aren't. Even hypothetically, you would have maybe a thousand designs since the birth of aviation. Nowhere near enough to train a large model.

So the most foundational technology brick for us is creating this training data set. It is synthetic, physics-based, and supply chain informed: hypothetical designs in whatever physical product domain. It could be airplanes, it could be something else. And we have to make it large enough and interesting enough. The design space for most physical products is almost infinitely large, so you can't randomly sample it, you can't evenly sample it; you have to very cleverly sample it. You want to sample densely around dominant designs, but you want to sample sparsely around the corners and edges of the design space, because that teaches you something. Even if that corner or edge of the design space is not somewhere you would ever want to go, it teaches your model something about why that is. And so creating these data sets for training models, that was the core of our approach.
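A rough sketch of that sampling strategy, dense near a dominant design and sparse toward the corners and edges; the two-parameter design space, the mixture weight, and the A320-like anchor point are assumptions for illustration, not the actual generation pipeline.

```python
import random

# Illustrative 2-D design space: wing area (m^2) and aspect ratio.
BOUNDS = {"wing_area": (80.0, 200.0), "aspect_ratio": (6.0, 14.0)}
DOMINANT = {"wing_area": 122.0, "aspect_ratio": 9.5}  # hypothetical dominant design

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def sample_design(p_near_dominant=0.8):
    """Sample densely around the dominant design, sparsely out to the edges."""
    design = {}
    for key, (lo, hi) in BOUNDS.items():
        if random.random() < p_near_dominant:
            # Tight Gaussian around the dominant design point.
            value = random.gauss(DOMINANT[key], 0.05 * (hi - lo))
        else:
            # Uniform draw that also reaches the corners and edges of the space.
            value = random.uniform(lo, hi)
        design[key] = clamp(value, lo, hi)
    return design

dataset = [sample_design() for _ in range(100_000)]  # each design would then be simulated
```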

Then, of course, if you now have a million airplane designs and a performance vector for each one, and you throw an LLM at it in post-training or even in pre-training, you're not going to magically get a good engineer. So then there is the question of what the model architecture looks like. Today we use a federated approach: a bunch of different models, which we can talk more about, that do different parts of engineering reasoning, and they're all orchestrated by an orchestrator reasoner LLM that also acts as the interface to the user.

Can you say more about that? How do you get your models to be capable of doing the physics-based reasoning? Is this stuff done in design software today? Is this stuff inside an engineer's brain? And how do you put that knowledge into a model?

And can I add to that the supply chain informed piece of the equation? How does all that come into play?

Sure, absolutely. So first let me describe what the product actually is, because I think that will help answer part of the question. We are focused, very narrowly in some ways, on cognitive automation of what a human engineer does in designing physical systems. So what does a human engineer do? Humans are very good at taking a bunch of requirements and distilling the key design drivers that come out of those requirements; postulating one or more possible solutions that meet those design drivers; and doing first-order sizing of what the answer roughly looks like and of the relevant phenomenology in doing that sizing. By phenomenology I mean the different physics, because it's not just about geometry: these are multiphysics systems, so they have electrical and thermal and vibration and electromagnetic interference effects, and sometimes those matter and sometimes they don't. Good engineers are very good at selecting which modalities matter in doing that first-order sizing: is this really going to close, is this really going to be a viable design? And then humans are very good at knowing what tools there are for detailed design and analysis, what the range of applicability of those tools is, how to use them, and how to set up the problem for them. That's exactly what we're trying to tackle: that cognitive automation.

So the first product is called Archie. If I refer to Archie, that's not Lee; Archie is the agent. A really important consequence of this focus on cognitive automation is that we are not trying to play at the tools layer. There are existing detailed design and analysis and simulation tools, and we want Archie to know how to use those tools the same way that a human knows how to use them. But we don't try to replace the tool. We don't try to make it better. We don't try to compete with it. We don't try to supplant it in any way. We just learn that they are there, and how to use them within their range of validity.

Right on top, just like a human.

That's right.

Yeah. So your question was around what the different models are and how you do the engineering reasoning. Basically, all of the things that I just described, distilling requirements, picking key design drivers, sizing, and so on, simplify to a couple of primitive operations. One is design evaluation: if you have a particular design, what is the performance of that design, again modeling the relevant phenomenology in the design. Another is design synthesis: if I have a specified performance or requirements vector, what is the design? And a third class is a little more complicated, which is finding errors and infilling inside a design. But basically any engineering query, any engineering task that a human engineer does, reduces to some sequence of these operations.
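One way to picture those primitives is as a small interface that any engineering reasoning system would have to serve; the signatures and type aliases below are a hypothetical sketch, not P-1 AI's API.

```python
from typing import Protocol

Design = dict        # e.g. the component-graph representation sketched earlier
Performance = dict   # e.g. {"cooling_kw": 350.0, "power_kw": 95.0}
Requirements = dict  # e.g. {"cooling_kw_min": 300.0, "power_kw_max": 110.0}

class EngineeringPrimitives(Protocol):
    def evaluate(self, design: Design) -> Performance:
        """Design evaluation: predict performance, modeling the relevant phenomenology."""
        ...

    def synthesize(self, requirements: Requirements) -> Design:
        """Design synthesis: go from a requirements/performance vector to a design."""
        ...

    def infill(self, partial_design: Design) -> Design:
        """Error finding and infilling: repair or complete a flawed or partial design."""
        ...
```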

So what we then have to do is, first of all, have a reasoner orchestrator that's good at taking tasking from humans in an organization and decomposing it into the right sequence of primitive operations, and then models, some neural and some that don't need to be neural, that are actually good at carrying out those operations. Some of the things behind the orchestrator reasoner are, for instance, a graph neural network that's very good at being a physics-based surrogate model over the performance space. That's one example. Another is a geometric reasoner model that allows you to answer questions about relative positioning and packing and interference and things like that. Some of those geometric reasoning operations are very easy to do algorithmically, software 1.0 style; you don't need neural capability. Some of the more complex ones you can do with VLMs. I think there is yet another category of physical reasoning operations that we don't yet know how to solve, and I think there is a generation of AI models coming, physical world models, that will have better intuition for some of the more complex, higher-order spatial reasoning tasks. And then you have physics reasoning, your multiphysics reasoning. There are a few different approaches there too: some software 1.0, some neural. One example is what I call a lobotomized LLM, which is an LLM that's no longer good at English, but is very good at working with programmatic multiphysics representations of physical system designs and reasoning over those.

So that's a federated assembly of models that are all orchestrated by an LLM reasoner, which is also the interface to the user.
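Putting those pieces together, a federated assembly could be sketched roughly as the dispatch loop below; the specialist names and the routing table are assumptions for illustration, and in the system described here the planner is itself a reasoning LLM rather than a lookup table.

```python
# Hypothetical registry of specialist models sitting behind the orchestrator.
SPECIALISTS = {
    "evaluate":   "gnn_surrogate",       # physics-based surrogate over the performance space
    "geometry":   "geometric_reasoner",  # packing, interference, relative positioning
    "physics":    "lobotomized_llm",     # reasons over programmatic multiphysics representations
    "synthesize": "design_synthesizer",
    "infill":     "error_infiller",
}

def orchestrate(task: str, models: dict, decompose) -> list:
    """Decompose a human task into primitive operations and route each to a specialist.

    `decompose` stands in for the reasoner LLM that plans the sequence of primitive
    operations; `models` maps specialist names to callables supplied by the caller.
    """
    results = []
    for op, payload in decompose(task):  # e.g. [("evaluate", design), ("infill", design)]
        specialist = models[SPECIALISTS[op]]
        results.append(specialist(payload))
    return results
```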

What is Archie capable of doing today? How does that compare to your average hardware systems engineer today, and what's ahead for Archie?

Yeah, that's a great question. We're about nine months old as a company. What we did in our pre-seed is basically a toy demo around residential cooling systems, air conditioning units, those kinds of things. The reason we chose that is because it's a fairly multiphysics domain: you have fluid flows, you have air flows, you have thermal interactions, you have electrical systems. So it's rich, but the number of components in a system is not very large, and a lot of the physics phenomenology is pretty linearizable; you can simplify it. So it's rich enough to be convincing, but not so complex that we're bogged down in data generation, for instance, or in getting the supply chain piece right, which I want to come back to.

So that demo exists; we've put it out publicly. The question, of course, is how good is it? And other than a vibe test, where you have a human interact with it and go, "Oh, that's pretty good," there isn't really a good answer today. So one of the things we've invested quite a bit of energy into is evals for physical engineering AIs. By the time this airs, I think we'll have an arXiv paper out that describes our approach to evals. We call it Archie IQ. The goal is to administer the eval to humans, to an entry-level human engineer, an average human engineer, and an expert-level human engineer, and to Archie, and for us to have a closed-loop process of improving Archie to move up that IQ scale.
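A toy sketch of what administering the same eval to human baselines and to the agent might look like, with a closed loop that tracks movement up the scale; the scores, thresholds, and grading scheme here are invented for illustration and are not the Archie IQ methodology.

```python
# Hypothetical human baseline scores on the same eval items (fraction correct).
BASELINES = {"entry_level": 0.55, "average": 0.70, "expert": 0.88}

def score(graded_answers: list) -> float:
    """Fraction of eval items graded as acceptable (grading itself is domain-specific)."""
    return sum(graded_answers) / len(graded_answers)

def iq_band(agent_score: float) -> str:
    """Place the agent on the human-referenced scale."""
    band = "below entry level"
    for name, threshold in sorted(BASELINES.items(), key=lambda kv: kv[1]):
        if agent_score >= threshold:
            band = name
    return band

# Closed loop: improve the agent, re-administer the eval, and watch the band move up.
print(iq_band(score([True, True, False, True])))  # 0.75 -> "average"
```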

Do you think you'll keep pushing on residential cooling systems, so that you'll have a residential cooling system agent that eventually becomes an airplane design agent, then a starship design agent? Is that the right way to think about this, or is this a single agent that you're building?

I think the right way to think about it is that, at least initially, we have to create distinct training data sets for each product domain, for each product vertical.

How do you guys think about that map? If the map starts with residential cooling systems, how does it progress from there? What does the overall map look like to get to the point of engineering AGI for the physical world? What's on that map?

Yeah. So first of all, residential for us was just a toy problem that we chose. Our first market, where we plan to deploy with a customer, with a design partner, is actually data center cooling systems. They're still thermodynamic engines, so they're not that different from residential HVAC, but they're an order of magnitude more complex, obviously much larger, and a very interesting market because they're having trouble coping with demand from data center customers. We're at a point where cooling systems are the long-lead item pacing data center development, which is kind of wild. So it is an acute pain point. In many ways, the delivery of those systems is limited by the engineering bandwidth to deliver semi-custom solutions to each data center. So we have a very enthusiastic customer base for that early deployment. These systems are on the order of a thousand unique parts. The physics domains are quite rich, but the physics again are still pretty linearizable, so from a synthetic data generation perspective it's a fairly manageable problem, which is why we like it as a first vertical.

Then we progress, and I think we progress principally on the basis of the complexity of this physics-based synthetic training data. Our expectation is that we will go roughly an order of magnitude up in product complexity every year. So the second vertical is probably industrial systems, things that go into a factory: material handling, industrial robots, mills, lathes, those kinds of things. Then we move into mobility domains, which could be automotive, agriculture, or mining equipment, automotive and heavy machinery, and then aerospace and defense. Just to give you the order-of-magnitude progression: a data center cooling system is roughly a thousand unique parts, an airplane is roughly a million unique parts, so three orders of magnitude between them, and we think, based on our current projections, roughly one year for each order of magnitude.
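The roadmap arithmetic, written out under the stated assumptions of roughly 10^3 parts today, roughly 10^6 parts for an airplane, and about one order of magnitude per year:

```python
import math

datacenter_cooling_parts = 1_000    # ~1e3 unique parts
airplane_parts = 1_000_000          # ~1e6 unique parts
years_per_order_of_magnitude = 1    # stated projection

gap = math.log10(airplane_parts / datacenter_cooling_parts)  # 3 orders of magnitude
print(f"~{gap:.0f} orders of magnitude, roughly {gap * years_per_order_of_magnitude:.0f} years")
```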

How much of the data required to train the system comes from usage of the system, such that the simple use cases start to bootstrap the more complex use cases? And how much of it is fed to the system from some other training data generation technique that you have?

We think we can train Archie to be at the level of an entry-level engineer: college educated, but not particularly savvy in a specific company's products, some of their in-depth processes and practices, or a lot of the detailed supply chain and cost data. That's not something you learn in college. We think we can do that just based on non-proprietary synthetic data that we produce, meaning non-proprietary to a customer. So the goal is to get Archie hired as an entry-level engineer, get him in the door. We then have a relationship with the customer, a data sharing agreement, and all of those things sorted, and then Archie can start learning on the things behind the firewall, obviously subject to the customer's acquiescence. We can then ingest their PLM system, we can ingest all of their model-based tools and models, and we can ingest a lot of the real-world performance of the system, quality escapes, and so on; there's a bunch of stuff there. So we think Archie can move up the expertise scale fairly rapidly, from entry level to average to expert engineer, on the basis of a lot of that real-world data, and of course on improvements in the AI models as well.

And do you have a definition when you talk about engineering AGI? We haven't found a generally agreed-upon definition of AGI. What's your definition, and how does it fit into the test: someday, when you have an engineering AGI, how will you know you have it?

Yeah, so back to the evals. We have adopted what's called Bloom's taxonomy, which is a cognitive knowledge taxonomy for human learning developed in the 1950s and applied to LLMs in recent years, and we have adapted it to the engineering task. The taxonomy is a kind of pyramid. At the lowest level you have just recall of information; that's relatively straightforward. Then you have semantic understanding of the design: in addition to recall, what does this part do? Then you have the ability to evaluate a design or a change to a design: what is the performance impact of changing this component, for instance, or of resizing something? Then there is the ability to find mistakes in a design; this is the error correction and infilling. Then comes the ability to synthesize a brand new design or a significant change to an existing design. And then the pinnacle, which we call E-AGI, engineering AGI, is reflection, which is some degree of self-awareness of what process I just used to do the preceding five levels in the hierarchy. What process did I use? What are the limitations of that process? Is there an alternative process? Where could I have gone wrong? These are the kinds of things that most engineers in the field actually don't do very well, and that are reserved for the senior levels, the experts or the technical fellows in large industrial companies. So to us, that is certainly the pinnacle of human engineering intelligence: self-awareness of the engineering process and of your own limitations within it.

And then there is a different dimension, which is: can it generalize across domains without us having to train it on the domain? So I would say those are the two axes, and you could argue that you can accomplish AGI on one axis, AGI on the other axis, or AGI on both axes. Pick your poison. We hope to do both.
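The adapted taxonomy can be written down as an ordered scale with reflection at the top and cross-domain generalization as a separate axis; the level names below are paraphrases of the conversation, not an official rubric.

```python
from enum import IntEnum

class EngineeringLevel(IntEnum):
    """Bloom's-taxonomy-style ladder adapted to engineering, as described above."""
    RECALL = 1           # recall information about a design
    UNDERSTANDING = 2    # semantic understanding: what does this part do?
    EVALUATION = 3       # evaluate a design or a change (performance impact)
    ERROR_INFILLING = 4  # find mistakes, correct and infill
    SYNTHESIS = 5        # synthesize a new design or a significant change
    REFLECTION = 6       # E-AGI: self-awareness of the process, its limits, alternatives

# The second axis is cross-domain generalization. The conversation allows counting
# AGI on either axis; this check takes the stricter "both axes" interpretation.
def is_engineering_agi(level: EngineeringLevel, generalizes_across_domains: bool) -> bool:
    return level >= EngineeringLevel.REFLECTION and generalizes_across_domains
```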

What do you think it's going to take to solve systems from the current order of magnitude of parts complexity all the way up to airplanes and beyond? Is it simply a matter of scaling laws, where the LLMs get better, you generate more synthetic data, and with more data, more compute, and bigger models you'll be able to solve these much more complex systems? Or do you think research breakthroughs are needed to get there?

No research breakthroughs needed, I think. I think we operate squarely in the applied research domain, where we take existing research that the frontier labs are doing and apply it to our very specific problem. Obviously there are limitations in scaling in terms of compute: there's CPU compute to generate the synthetic data, because that's a lot of simulation and sampling, and then there's GPU compute to train and GPU compute for inference. And all of those, today, I don't think we could do for a million-part system.

Because if you think about it, and maybe to tie back to your question, Pat, about where the supply chain comes in: how do we create these synthetic data sets? If you have a million unique parts in a system, then in order to span the design space and create a very large number of adjacent systems and some far-away systems, you need a catalog of components, a catalog of component models, and some rules by which you can compose those components into systems. And your component catalog needs to be a couple of orders of magnitude bigger than a typical system design. So if you have a million unique parts in a system, your component catalog maybe needs to be a hundred million or a billion parts. So first, you need to create that component catalog. Today we do it manually, and we are building a lot of automation and a lot of AI tools to help us build that catalog of component models. Then you have to intelligently assemble those components, so it's not a tornado flying through a junkyard and assembling a 747; you actually have some method for creating it. And then you have to simulate each of those and get a performance vector. That's the training data set.

It's supply chain informed because, in theory, all of the components in your catalog either reflect a real component in the supply chain, or you can introduce hypothetical components, because sometimes innovation is not just assembling things that exist but saying, "Hey, I need a new motor, or I need a new compressor, I need a new this or a new that." So you can introduce new components that don't exist, but you know what those are and how you plan to get them. That's what we mean by supply chain informed. And physics-based means that the rules for composing those components model all of the relevant modalities of interaction that you care about, the phenomenology of how they interact, and that the designs that are produced are in fact realizable designs.
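Pulling that description together, the generation pipeline might look like the sketch below; `CatalogPart`, `compose`, and `simulate` are placeholders standing in for the component models, composition rules, and multiphysics simulation described above, with a `hypothetical` flag for parts not yet in the supply chain.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogPart:
    part_id: str
    kind: str                   # "compressor", "fan", "heat_exchanger", ...
    attributes: dict = field(default_factory=dict)
    hypothetical: bool = False  # True for a part that does not yet exist in the supply chain

def generate_dataset(catalog, compose, simulate, n_designs):
    """Produce (design, performance) training pairs from a component catalog.

    `compose(catalog)` stands in for the composition rules that assemble a
    realizable design; `simulate(design)` stands in for the multiphysics models
    that return a performance vector.
    """
    dataset = []
    for _ in range(n_designs):
        design = compose(catalog)       # intelligent assembly, not a tornado in a junkyard
        performance = simulate(design)  # e.g. {"cooling_kw": ..., "power_kw": ...}
        dataset.append((design, performance))
    return dataset
```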

I'd love to hear the customer perspective. You've been the customer before; notably, you were the CTO of Airbus. Can you walk us through, for those of us who haven't been inside the belly of the beast of an industrial heavyweight, what the process is like to design a new airplane? What are all the engineers at these companies doing, and what does their life look like before and after engineering AGI?

Yeah, it's a very good question. I think I gave you a reasonable abstraction of what an engineer does: they operate with some set of requirements. They may not be system-level requirements; the engineer may be working on a subsystem or an assembly or a widget, but they still have requirements, they still need to pick the key design drivers from those requirements, figure out the solutions, do first-order sizing, and then do the detailed analysis. That workflow gets replicated in a kind of fractal way throughout the system and throughout the engineering organization, which is designed to roughly mirror the product that you're building.

One of the reasons we position Archie as an agent, meaning he's fairly autonomous, so it's not an assistant, is that he's really designed to augment a team rather than help an individual. We are trying to position Archie as an employee who joins a team. One of our mission statements is an Archie on every team in every major industrial company in the world. Archie joins the team, and the goal is to sell work, not software, to these companies. It is very, very difficult to sell engineering software to a company like Airbus. There are hundreds if not thousands of engineering tools in the ecosystem, and they are connected in intricate, to put it politely, sometimes inelegant, glueware ways, and introducing a new tool into that ecosystem is very complex. On top of that, the labor budget at these companies is much bigger than the methods-and-tools software budget. So you want to tackle the labor piece, not the tools piece.

And so Archie is really designed to show up on the team and be a remote engineer. Obviously there's no embodiment, but he shows up on Slack or Teams or whatever collaboration tool you're using, and you task him as you would a junior engineer who happens to be at, say, an offshore engineering center, and you interact with him that way. So there is really minimal friction to introducing an Archie into the organization. You don't need to do anything differently, you don't need to change your processes; you just have this lower-cost entity that shows up. Archie will probably be better at some things and maybe worse at others, but the goal is to position him as a worker.

Why Archie? Where did the name come from?

Well, it's the letter A, so it allows us to have a Bob and a Charlotte and a Daniel right down the road. Archimedes, architect: all of those are, I think, connotations that are relevant to what we're doing.

What sorts of problems do you think Archie will be tackling, and how do you expect that changes what the human engineers on the team are doing?

In the data center application, which is the first one we expect to pilot this year, we think the most promising, and also the most applicable use case for Archie as we bring him to other domains, is product customization. Semi-custom work; they call them specials in the data center cooling world. This is taking an existing product platform and customizing it for a specific customer's use case: to meet architectural requirements, functional requirements, building codes, and so on. That tends to be different and fairly bespoke on a case-by-case basis, and it's where most of the engineering hours go. So that's the problem we're tackling first with Archie. But that problem translates to other domains pretty well. Airbus, for instance, very seldom does a clean-sheet airplane design, but does a lot of derivatives and a lot of what are called head-of-version variants, which are a particular product for an airline, with a specific cabin configuration, a specific in-flight entertainment configuration, specific cockpit requirements, and so on. That's what most engineers at most industrial companies do: semi-customization.

If we go to 2030, 2040, some long-term time horizon, and there are millions and millions of Archies, and maybe Bobs and Charlottes and Daniels, out there in the world, and you've achieved engineering AGI for the physical world, how will the average person feel the impact of that? How will they notice that their life is different as a result of engineering AGI becoming a thing?

So I think it's a time-horizon question, and I am hesitant to predict anything that's more than about three years out, especially in these steeply exponential times. But in the first instance, Archie shows up on engineering teams and makes the team more productive, and maybe helps the team do things more efficiently. One use case we've talked about is, if you have an Archie on every team, can the Archies coordinate among themselves better than the humans do between the teams, speak their own kind of shorthand, and do those kinds of things? So that's really about improving the efficiency and efficacy of existing engineering organizations. For the average person, the impact is lower-cost goods and products.

So you're saying I can buy an airplane, perhaps.

Right, perhaps. I think the really interesting stuff starts when Archie can design things that we can't. That's the superintelligence part, where it's not just about efficiencies of existing organizations, or increasing the bandwidth of existing organizations, but really designing the stuff that was promised to us in the sci-fi books: the starships and Dyson spheres and Matrioshka brains and those kinds of things. So ultimately, I'm a dreamer. That's why I started this company, and that's the future that I want. That's squarely the north star that guides us. But of course we want to build a pragmatic and profitable business in the meantime.

Our partner Konstantine has this term, the stochastic mindset: working with computers in the past was predetermined, you ask for this and you get this back, whereas with models there's a stochastic part of their nature by definition. How do you think about managing around that in your domain? Because if I think about it, I can vibe code a web app and it's okay if it breaks. It's not great if I vibe code an airplane and it breaks; that's disastrous. So how do you think about managing around the stochastic nature for the physical world?

Well, humans are pretty stochastic as well. If you have a junior engineer working on a task, they'll make mistakes. They may not do the right thing. They may not be repeatable. So I think the question that we need to quantify, and expect to quantify in our pilot later this year, is: what is the error rate coming out of Archie? If that error rate is comparable to human engineers, then there are a lot of checks and balances built into existing engineering organizations to ensure that a mistake a junior engineer makes doesn't bring down an airplane. There are layers of review, there are milestones, there are tests. So if Archie has a comparable or better error rate, then it should be a pretty seamless slotting into the existing processes.

What does the engineering org of the future look like? Do you think we'll have one-person Airbus equivalents in the future?

Again, I'm reluctant to forecast the future beyond about three years out. I think in the next couple of years our goal is, again, an Archie on every team, so 10% of the workforce is Archies. They do the work that humans maybe find boring, dull, or repetitive, and maybe there are additional value-adds like inter-Archie coordination and things like that. And then I can imagine a superintelligence where you tell it, "I want you to start building a Dyson sphere," and it starts building the Dyson sphere. What's in between is difficult to forecast.

Okay, lightning round. I'll go first. What application or application category do you think will break out this year?

I think we're getting close on physical AIs, not in the sense that we're talking about here, but in the sense of robotics, as well as foundation models for ingesting real-world sensor data. I think both of those are quite important building blocks for what we're trying to build, and I think they're very, very close.

Humanoids, yes or no?

Yes, humanoids, on the same basis that we are trying to build an agent that slots into existing teams: I think humanoid robots can slot into existing environments much more easily, even if they're not the optimal configuration.

What one piece of content should AI people consume?

I think everybody should read, or go reread, Asimov's Robot series.

Ah, good one.

Because I think the laws of robotics were very carefully thought out, and are a lot of what actually needs to be built, somehow, very deeply into these models to ensure alignment.

Very good one. What other startups do you admire?

I think a lot of the work being done on models for ingesting physical-world data is kind of unsung but incredibly important. And the reason, if you don't mind a slightly longer answer: look, we don't know why neural networks work, fundamentally, but we have a vague, neuromorphic, anthropomorphic view that we're trying to replicate what a human neuron does, and if you do enough of them you get these wonderful emergent properties. But if you take that further and ask how humans acquire knowledge, a human baby: the very first thing they do is touch, then taste, hearing, eventually vision, then language, then the higher-order engineering reasoning and spatial reasoning that are maybe built on top of language or on top of some of the other perception and sensory models they have. With deep learning we've replicated the neural structure, to some approximation, but then we said, because of data availability, we're going to go language first and scrape the whole internet, and then do video and imagery, so vision, but we've skipped touch, taste, hearing, and so on. And touch, I think, is particularly important for building a sense of perception, and I keep coming back to spatial reasoning and the ability to think abstractly about three-dimensional objects and structures. So I'm very bullish on a number of companies there. Archetype is a good example, founded by one of my former colleagues at Google, working on a foundation model for ingesting sensor data, and that foundation model has actually demonstrated that it can infer some of the physics underlying that data, which I think is immensely cool. I think all of those building blocks ultimately may need to be there for engineering AGI to happen; just language and vision is not enough.

All right, last question. What AI app is your personal favorite to use?

The less interesting answer would be ChatGPT and Cursor, which are both there. The perhaps more interesting answer is that, as we were coming out of stealth, we wanted to produce a video that shows that north-star vision we've been talking about, of ultimately engineering AGI and the path to get there. So we worked with a studio called IMIX, which is an Israeli-LA kind of thing. They did the Trump Gaza video; do you guys know the one that went viral maybe a month or so ago? They did a fully AI-generated, roughly two-minute Archie biopic clip, which people can see on our website. It was completely AI-generated, it was done in two weeks, and it was done at about, I would say, a fiftieth of the cost of what a comparable piece of content would have been without AI. Everything, voice, video, music, everything in that short film, is completely AI-generated using a variety of models, some of which are their own, many of which they stitch together from the ecosystem. To me, it was absolutely mind-blowing.

Very cool. Wonderful. Paul, Lee, thank you so much for joining us today to share more about your vision for the future of engineering AGI for the physical world. We're excited for the day when you bring down the cost of buying an airplane, and in the meantime we're excited to see what Archie can do.

It's our pleasure. Thanks for inviting us.

Thank you.
