
AI for Atoms: How Periodic Labs is Revolutionizing Materials Engineering with Co-Founder Liam Fedus

By No Priors: AI, Machine Learning, Tech, & Startups

Summary

Topics Covered

  • ChatGPT Was Too Weak for the Physical World Until 2023
  • Language Models as Orchestration Layer for Science
  • Materials Science's Agricultural Revolution Moment
  • The Closed Loop Future of AI Research
  • Labor Shortages Create Massive Opportunity at AI-Physical World Interface

Full Transcript

Today on No Priors we're talking with Liam Fedus. Liam is one of the co-creators of ChatGPT, which I think almost everybody uses at this point. He was the VP of post-training at OpenAI, and before that was at Google Brain, where he worked on a variety of really early AI innovations. Liam will be telling us a bit about Periodic Labs, his company, which is focused on building an AI foundation lab for atoms. In other words: how do we impact the physical world, materials science, chemistry, etc., using AI? Very exciting topic, and excited to be talking with him today.

Well, yeah, thank you so much for joining us today on No Priors.

Yeah, thank you so much for having me.

It's great to see you.

Yeah. So, maybe what we can do... I think you're doing incredibly interesting things in terms of alternative types of models, specifically for materials science, for the physical world. Effectively, what you're building is an AI foundation lab for atoms, which I think is fascinating.

That's right.

But maybe we can start with a little bit more of your background. You know, I think you were a VP at OpenAI. You worked on one of the first trillion-parameter models ever, etc. Could you tell us a little bit more about what got you here?

Yeah, so even further back, I was a physics major in undergrad. I spent some time doing dark matter research. We had an apparatus that was directionally sensitive to dark matter's direction. So it was very interesting.

Why... sorry, I'd love to come back to this, but why are there so many physicists in AI right now? You look at Dario, who runs Anthropic, of course. You look at Adam Brown at Google. You look at a variety of people, and they all kind of have these physics backgrounds.

Yeah. My old manager Jascha, also a physicist, now at Anthropic.

Yeah. Why do you think that is?

I think it's a great way to think about the world. It's very principled, very hard-nosed scientists, very careful. And I don't know, I think it's just such an incredible field. You have such high leverage in computer science, in AI.

Mhm.

And so I think a lot of physicists were seeing that, particularly in high energy physics. After the discovery of the Higgs, I think a lot of high energy physicists were sort of looking for what's next. Ultimately it becomes bottlenecked on the new apparatus for pushing the next energy frontier, and I think a lot of physicists were looking at their skill set and looking at the progress elsewhere and saying, hey, I think I could be a huge contributor elsewhere.

This has been fascinating to see: string theorists and people working on black holes and all sorts of effects kind of moving into AI.

Absolutely. It almost feels like we're recreating the Manhattan Project or something, except now what we're seeking is, you know, different forms of intelligence.

Yeah, that's right, kind of that perspective.

So, sorry to interrupt. You know, you studied physics, you worked on dark matter.

That's right. And then in grad school in physics, I was always gravitating towards the machine learning problems. I was looking at particle reconstruction, which is effectively a machine learning problem. But it felt like if I really wanted to push the frontier of machine learning, I should be in computer science. So I ended up at Google Brain, overlapping with the first-year residents there. Absolutely remarkable group of people, remarkable period for Google Brain. I mean, it was the era of the creation of distributed training strategies, mixture of experts, the transformer. It was a really rich period in that history, and it was a fun kind of Cambrian era where people were really pushing the frontier with just a handful of GPUs, really small collaborations. The field was much, much earlier, and I think there was a lot of diversity and entropy in the research, and it was very fun.

So that's kind of the late 2010s or so, something like that?

This was 2016, 2017.

So Google Brain at that point was still really small, and eventually was subsumed by DeepMind, or combined with DeepMind.

Mhm.

So I was at Google for many years, mostly doing architecture work. I was really pushing sparsity, which allows for more efficient serving of models at scale, and just really pushing the scale of what we could do. Towards late 2022, I really became excited about the creation of products. The technology was getting very compelling, and so I ended up at OpenAI with some other Googlers as well.
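For readers unfamiliar with the sparsity idea he mentions, here is a minimal, illustrative sketch (not any production system) of top-1 mixture-of-experts routing: a router sends each token to one expert network, so only a fraction of the parameters is active per token, which is what makes serving large sparse models cheaper than dense ones of the same size. All dimensions and names are made up for illustration.

```python
import numpy as np

# Illustrative sketch of sparse mixture-of-experts routing (toy numbers).
# Each token is routed to its top-1 expert, so only one expert's weights
# are active per token: the sparsity that makes serving cheaper at scale.

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 16, 4, 8

tokens = rng.normal(size=(n_tokens, d_model))
router_w = rng.normal(size=(d_model, n_experts))           # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

logits = tokens @ router_w                                  # (n_tokens, n_experts)
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
choice = probs.argmax(-1)                                   # top-1 expert per token

out = np.empty_like(tokens)
for i, tok in enumerate(tokens):
    e = choice[i]
    out[i] = probs[i, e] * (tok @ experts[e])               # gate-scaled expert output

print(choice)  # which expert handled each token
```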

Mhm. And what did you work on specifically at OpenAI?

Well, so the goal was: we need to come up with some productionization of GPT-4. OpenAI had GPT-4. It was pre-trained, and there were some rough post-trains on it, and there were questions about, okay, how do we turn this incredibly powerful model into products? And we were all spitballing ideas, like a writing bot, a coding bot, you know, very natural at the time. One of our least interesting ideas was a meeting bot: it would just sit in a Google Meet, take notes, and then send out to-dos after. But John Schulman was very opinionated. He was like, "We think we should keep it very general. Let's do a chatbot." And that became a large part of the effort for those few months.

Yeah. So you worked on ChatGPT.

That's right.

And obviously I felt like that was kind of the starting gun of this whole AI revolution, or at least in terms of people's awareness. Like, I'd started investing in the area beforehand, right? But it seemed almost like a secret up until ChatGPT came out, and then suddenly everybody realized that there's this powerful technology available.

Yes.

How did that lead you to materials and atoms and, you know, the physical world again? I know that was sort of your starting point academically, but what brought you back, given how much is being transformed right now through language?

I think just the inevitability of connecting these systems to the physical world. The opinion that I and others held in starting Periodic was: you're not going to see the same kind of acceleration in science and technology unless you start connecting these things to the physical world. Science ultimately isn't sitting in a room thinking really hard. You have to conduct experiments, you have to learn from them, you have to interface with reality. And the creation of ChatGPT in late 2022 was an important technology, but it was still far too weak. Like, we couldn't have done Periodic on the technology of that era.

Mhm.

I think over the next few years past that, we saw ever-improving models. We saw reasoning. I think test-time inference became really important. That led to more reliable error correction, more reliable tool use. And we see the rise of coding agents and other agents. And I think those were foundational technologies necessary to then connect these systems to the physical world. It was just not possible with the AI technology of 2022.

I guess the other thing that's missing from the physical world is data, or at least data that's easily accessible. You look at something like the big foundation models on the language side, and they're basically trained on the internet as a major corpus, augmented in all sorts of ways with other data sources. How do you think about that for what you're doing, where you're trying to model atoms in the physical world and how all that stuff kind of works?

Yeah. So we have simulation, physics simulations, and we have experiment. And, you know, exactly as you're pointing out, ML systems are good on the data you've trained them on, on the tasks you've trained them to do. I think sometimes there's this mythology of AGI, ASI, RSI, and I think we see increasingly powerful systems, but they do become limited if they don't have access to the raw data to actually make informed decisions.

How much data do you need? I know that there's some data-scaling-related research and other things in terms of how you kind of hill-climb towards a really good model. How many experiments do you need to run, or how many data points do you need, or how do you think about the diversity of data points you need to generate? I'm a little bit curious what that actually looks like tangibly.

So there is some generalization from the existing models. We don't need to reproduce a system that can understand and write English or write code. So we're kind of leveraging that.

And are you using open-source for that, or closed-source models, or some...

We use a combination.

Yeah. So, for example, Periodic spends zero effort on improving coding models. We're, you know, incredibly impressed by Codex, Claude Code, and so that's been a huge accelerator for the company. But we focus our machine learning efforts where the existing frontier is not sufficiently good for us. I think, going back to the data question, we're leveraging, call it, order tens of trillions of tokens that went into open-source models, and that's given this very foundational understanding. But once we start moving into specific discovery areas, chemical spaces, we can see a very high level of sample efficiency. So the system isn't starting as a randomly initialized neural net. It has a strong prior on the world.

And where does that prior come from? What data informs that?

Just general... just like, you know, papers, the internet, as you're pointing out.

Yeah.

However, that's insufficient. One of the engineers on our team was looking at a reported material property, just values extracted from literature, and it was really interesting to see that the reported values spanned many orders of magnitude. So if you train an ML system on that, the best you can do is model this distribution, but you're no closer to a ground truth. And that's where experimental data comes in, where you now have a grounding.

But really important: it's not just a pool of data. It's this interactive, closed-loop system that is so powerful. Once you have the experimental data, you can look through it. You can look for aberrations. You can look for patterns. You can look for consistency with simulation data, with literature, and then that helps drive the next set of experiments. So it's not just a pool of data; it's a very active loop.
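A minimal sketch of what such an active loop can look like in code; everything here (the toy surrogate, the simulated experiment, the batch size) is hypothetical and only illustrates the propose-measure-refit cycle he describes:

```python
import random

# Minimal sketch of a closed experimental loop (all names hypothetical).
# A surrogate model proposes candidates, the "lab" measures them, and the
# new ground-truth data refits the surrogate before the next round.

def run_experiment(x):
    """Stand-in for a real measurement: a hidden ground truth plus noise."""
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

class Surrogate:
    """Toy surrogate: remembers the best measured point seen so far."""
    def __init__(self):
        self.best = 0.5
    def fit(self, data):
        self.best = max(data, key=lambda p: p[1])[0]
    def predict(self, x):
        return -abs(x - self.best)  # prefer candidates near the current best

candidates = [i / 100 for i in range(100)]
surrogate, dataset = Surrogate(), []

for round_ in range(5):
    # Propose: rank untested candidates by the surrogate, take a small batch.
    chosen = sorted(candidates, key=surrogate.predict, reverse=True)[:8]
    candidates = [c for c in candidates if c not in chosen]
    # Measure: physical experiments supply the grounding literature can't.
    dataset += [(c, run_experiment(c)) for c in chosen]
    # Update: the new data drives the next set of experiments.
    surrogate.fit(dataset)

print(f"best candidate so far: {max(dataset, key=lambda p: p[1])[0]:.2f}")
```

The point of the sketch is the shape of the loop, not the toy models: each round's measurements change what gets proposed next, which is exactly what a static pool of data can't do.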

I see. And then how do you think about diversity of data? I look at something like AlphaFold or some of the protein-folding-related models, which are amazing. If you think about it, I used to work as a biologist, and a crystal structure would take years, if it happened at all, because you weren't necessarily certain you could crystallize the specific protein under certain reagent conditions in a way that would be performant for actual X-ray crystallography, or NMR, or whatever approach you took for structure. And then AlphaFold comes out and you can just arbitrarily model anything in the protein world, which was amazing as a breakthrough. But it was a very specific data set that already existed, with lots and lots of structures over decades of work. How hard do you have to bootstrap that for every single materials domain, or do you choose specific ones that you think can then generalize?

We have seen internally the greatest advances where we have an abundance of data in some space, and that has led to the highest rate of acceleration internally. But I think you can think of different levels of generalization, and for systems that are strongly governed by quantum mechanical effects, there is some generalization there.

I see.

But if you produce a system that has modeled quantum mechanical objects really accurately, it's not really helping much on, you know, fluid dynamics, another kind of level of abstraction.

And so the generalization we're seeing is quite good. But there's almost like a first-principles level you can...

Oh, that's so interesting. So you could do, like: here are the basic steps of chemical synthesis. Here's quantum mechanics. Here's different aspects of how atoms interact in general, or van der Waals forces, or things like that.

Absolutely.

Oh, so interesting. Yeah, that's cool. And then from an architecture perspective, is there anything unique or interesting that you're doing? Can you talk a little bit about how you're actually constructing some of these models on top?

Yeah, so language models are incredibly powerful. It's a very natural interface, and so we continue to use these, but we think about them almost as an orchestration layer. So that's sort of a co-pilot assistant, but also a system that can direct experiments, and it's orchestrating other specialized models as well. So we do construct neural nets that are specially designed for atomic systems, where there's some symmetry awareness, and those have much lower latency and have been fine-tuned for that. And so basically you can think of this orchestrating layer that can ingest literature. It can go through our experimental data. It can go through different modalities. But it can also use specialized neural nets as tools, as reward functions. So it's like an overall system.
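To make the pattern concrete, here is a minimal sketch, not Periodic's actual stack, of an orchestration layer that routes atomic-systems queries to a specialized model whose inputs are pairwise interatomic distances, a simple form of the rotation- and translation-invariance ("symmetry awareness") he mentions. All names are hypothetical:

```python
import numpy as np

# Illustrative sketch (not Periodic's actual stack): an orchestrator routes
# requests either to a general language model or to a specialized low-latency
# model for atomic systems. The specialized model consumes pairwise distances,
# which are invariant under rotation and translation (basic symmetry awareness).

def pairwise_distances(positions: np.ndarray) -> np.ndarray:
    """Rotation/translation-invariant features for a set of atomic positions."""
    diffs = positions[:, None, :] - positions[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

def atomic_energy_model(positions: np.ndarray) -> float:
    """Toy stand-in for a fine-tuned interatomic potential: a pair potential
    summed over the invariant distance features (Lennard-Jones-like)."""
    d = pairwise_distances(positions)
    iu = np.triu_indices(len(positions), k=1)
    r = d[iu]
    return float(np.sum(4 * ((1 / r) ** 12 - (1 / r) ** 6)))

def orchestrator(task: dict) -> str:
    """Hypothetical routing layer: tool calls for atomic queries, LLM otherwise."""
    if task["kind"] == "energy":
        return f"predicted energy: {atomic_energy_model(task['positions']):.3f}"
    return "(would be handled by the general language model)"

atoms = np.array([[0.0, 0, 0], [1.1, 0, 0], [0.55, 0.95, 0]])
print(orchestrator({"kind": "energy", "positions": atoms}))
print(orchestrator({"kind": "summarize", "text": "..."}))
```

Because the specialized model only ever sees interatomic distances, rotating or translating the whole structure leaves its output unchanged, which is the simplest version of the symmetry property he's pointing at.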

Okay. Yeah, that makes a lot of sense.

Yeah, I've seen a lot of people architect those sorts of approaches, even for things like customer support or other areas. It seems like it's the common architecture that's emerging across these different use cases of these models.

Yeah. The transformer has been very powerful.

Yeah. And that's really cool. So if I look at the language world, one of the things that was pretty unique about it, and it's the reason I think these companies like OpenAI, Anthropic, and others are growing so fast, is it just plugged into a very big domain of human existence, which is language. And language means enterprise software and enterprise interactions, and it means consumer behavior. It's basically how we interact with the world.

Yes.

It seems like there's a little bit more of a leap for other areas. For example, in robotics there are really interesting things, different types of robots that exist in the world, but the footprint of that is quite limited relative to language, and the same seems to be true for materials science. So how do you think about where you're going to commercialize this first, or who you're going to work with, or are there specific domains of products that you're working on first?

So we've begun working very closely with scientists. We've treated Periodic as our customer zero, seeing how we can transform how this field of science is done.

Mhm.

But there are huge opportunities across all of these industries, all these enterprises that are interfacing with the physical world. People who are bottlenecked by materials engineering, process engineering. And again, those have kind of the same natural interfaces, where engineers are asking questions about their data. They're trying to find aberrations. They're trying to debug machinery. They're trying to get to a better formulation. It's actually quite a universal thing as well. And so we've kind of created our little testing ground internally. And now we're sufficiently excited about the tech we've been building to see this acceleration for advanced manufacturing more broadly.

And is your model going to be developing materials for other third parties? Is it developing your own materials that you then sell in the market? Because it almost reminds me a little bit of a biotech model.

Yeah.

Where in biotech you can either partner with a big pharma and effectively help them create a drug and take a royalty on it, or you can build your own drugs. How do you think about that in the context of what you're doing?

We're thinking about ourselves as an intelligence layer for these companies. So you can think about a system of record, a control plane for different experiments and getting to solutions. But like you're saying, there is a very interesting aspect where some breakthroughs here could have really high value, and it might be more akin to a discovery model like we've seen in biotech and elsewhere. But we're starting by thinking about ourselves just as a software business.

Have you ever read The Diamond Age?

Very fast.

Have you read The Diamond Age?

No, I haven't.

It's the Neal Stephenson book. It's basically this book... it was written in the '90s.

Okay.

And there are two key concepts in it. One key concept is there's effectively an AI tutor that's unleashed on the world, and it teaches huge numbers of young girls all sorts of skills. It's this very interesting thing about AI education. And in particular, this AI research scientist creates a primer for his daughter, and the Chinese steal it and clone it and distribute it across the country. And because he built it for young girls, suddenly every young girl in China has it. So it's this very China-theft-of-IP kind of thing. And then the other part of the book is about matter pipes into everybody's homes: they all have 3D printers, and you download blueprints and it just creates whatever you need in the physical world, and some people start evolving different nanobots to do different things. It's this very advanced kind of AI-plus-materials future world.

Yes.

What is your vision or conception of what our world looks like in 10 years, assuming Periodic is successful?

Well, I mean, I think as you're pointing out, you're going from systems that aren't just writing essays, not just writing software, but are literally generating matter. And I think that has pretty profound implications for semiconductors, aerospace, energy. And I think it's incredibly important for whether we can increase the pace of the physical development of the world. I mean, we see how quickly the digital realm is changing. Software engineering now looks wildly different than even six months ago. But I think we see similar opportunities in the physical world. Of course, atoms are hard, and so you will have some limits of physics. But just because atoms are hard doesn't mean there's not an order of magnitude or two to speed up, just making sense of huge amounts of data and getting to solutions more quickly. So I think what we're trying to do is give humanity this agency for atomic rearrangement, synthesis, and we think it's going to be a huge accelerator. So, I mean, if our physical world could keep up at some fraction of our digital world, I think life will just feel dramatically different.

Yeah. It's kind of the revolution that could really come. It kind of reminds me of almost the materials equivalent of the agricultural revolution.

Yeah.

We suddenly had a massive spike in productivity of output, and it seems like there have been all sorts of bottlenecks that have constrained us until now that you are trying to address.

That's right.

Yeah. What aspect of the work that you're doing are you most excited about?

The iteration between these groups of people. I mean, this is just irreducibly a multidisciplinary problem. We have physicists and chemists working really closely with some of the top AI researchers in the world, working closely with some of the best engineers in the world, and this multidisciplinary, really close collaboration is just absolutely incredible. Because you're seeing firsthand how a field can fundamentally change: people who have been doing research for, in some cases, decades in a field are now seeing, oh, under these intelligent systems it could look this very different way. And I use an analogy to machine learning a lot, going back to the early Google Brain days, where the frontier was pushed forward by a few GPUs and a few people. Now you look at this era where it's really industrialized, and there are dozens, hundreds of researchers working together with hundreds of thousands, millions of GPUs, dictated and driven by scaling laws. Everything is about scaling. It's given that predictability. It's allowed us to put huge amounts of capital into this field.
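The predictability he credits to scaling laws comes from their empirical power-law form: loss falls off roughly as a power of compute, a straight line in log-log space, so cheap pilot runs can forecast a much larger one. A minimal sketch with made-up numbers standing in for real training runs:

```python
import numpy as np

# Minimal sketch of scaling-law extrapolation (made-up numbers, not real runs).
# Empirically, loss ~ a * C^(-b) for compute C, which is a straight line in
# log-log space, so small cheap runs can predict the outcome of a big one.

compute = np.array([1e18, 1e19, 1e20, 1e21])   # FLOPs of small pilot runs
loss = np.array([3.10, 2.60, 2.18, 1.83])      # hypothetical measured losses

# Fit log(loss) = log(a) - b * log(C) with ordinary least squares.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

big_run = 1e23                                  # a run 100x larger than any pilot
predicted = a * big_run ** (-b)
print(f"fit: loss ~ {a:.2f} * C^(-{b:.3f}); predicted loss at 1e23 FLOPs: {predicted:.2f}")
```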

And I think the physical sciences, physical engineering, will have a very similar property, where we establish these scaling properties and bring that mindset. And so Periodic, in this field, is really thinking about how we bring much larger-scale sets of experiments to bear on this. Intelligent systems have enabled this, automation has enabled us, and you really need both, because an improvement in automation can soon create bottlenecks in intelligence. And the scientists very much feel this: they're not used to working at that level of throughput, and they simply can't make sense of so much data.

So interesting. Yeah. So I guess in terms of scale here, one of the things that's really benefited the frontier labs on the LLM side is just scale of capital, and therefore scale of GPUs and scale of data, of course. Is this similarly a capital-intensive area in your mind?

Yeah, we will require more capital. GPUs are so extraordinarily expensive. And what's interesting is the compute cost relative to physical infrastructure is actually surprising: so much money is spent on the compute that the physical infrastructure cost is sometimes actually lower. But it has very long lead times, and there's intrinsic difficulty in having these well-calibrated, well-functioning physical systems. But from a capital perspective, it's primarily a compute cost.

Yeah, it's really interesting. If you look at the cost of a Stanford postdoc, for example, relative to a machine learning engineer, it's such a big difference. And my takeaway is that many people working in science, particularly in an academic setting, are very undercompensated relative to their societal value.

Absolutely.

And so I always like it when companies kind of help bring people into the fold, in terms of both human impact but also, you know, that ability to do things at real scale and really do things a different way. So it must be very exciting for the people on your team.

Yeah. I mean, some of the scientists who have joined us are among the best in the world, and it's been absolutely incredible working with them.

Yeah. I mean, it sounds like you've built such an amazing interdisciplinary team. Are there specific roles you're actively looking for right now, or key things that you really want to hire for?

Absolutely. So on our site we have decomposed the world into bits and atoms. You know, it's a loose taxonomy, but on the bits side we're really thinking about mid-training and pre-training roles on the AI side, always more infrastructure roles, and on the atoms side, control engineering, systems engineering, but also now thinking about expanding that with project engineering. So, yeah, those roles, etc.

That's really cool. So I think one of the things that everybody's really thinking deeply about, or is excited about right now, is AGI, ASI, sort of these advanced systems that are as good as humans or better than humans at different things, right? Or are very generalizable in terms of their abilities to do a broad swath of things. How do you think about that, both in the context of what's happening with the overall foundation model curve, because obviously you were very integral in the development of some of these systems, and then how do you think about that applied specifically to some of the areas you're working in?

I think one fallacy is thinking about intelligence as a scalar. We've consistently seen these systems have a very odd spikiness, and it's actually possible to architect a system that is world-class on some math domain, but then you could do some perturbations to the questions and actually degrade it substantially. So it's like a bad high school student.

And so there's this odd spikiness to these systems. So basically you can make a system that's a genius at one thing and not very good at a bunch of other stuff.

And I guess the point I was making is those fields can actually be quite adjacent. So sometimes the generalization can be non-intuitive.

But one way I think about recursive self-improvement: it's really kind of akin to neural architecture search from roughly 10 years ago, and I think there's a very clear path for software engineering. These systems have become so incredibly impressive on this domain as a result of huge amounts of data and really cheap, verifiable environments. Like, you can check unit tests going from failing to passing with just a few CPUs; it's basically instantaneous; and there's no domain expertise gap between an AI researcher and a software engineer.
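That cheap, verifiable environment fits in a few lines: run the test suite and grant a binary reward only when failing tests flip to passing. A minimal sketch, with a hypothetical repo path and a hypothetical apply_patch helper, using pytest's exit code as the verifier:

```python
import subprocess

# Minimal sketch of a verifiable environment for code (hypothetical setup):
# the reward is 1 only if the repo's tests pass after the model's patch.
# pytest exits with code 0 when all tests pass; that exit code is the verifier.

def tests_pass(repo_dir: str) -> bool:
    """Run the test suite; cheap, objective, and needs only a few CPUs."""
    result = subprocess.run(
        ["pytest", "-q"], cwd=repo_dir,
        capture_output=True, text=True, timeout=300,
    )
    return result.returncode == 0

def reward_for_patch(repo_dir: str, apply_patch, patch) -> int:
    """Binary reward: did the failing suite flip to passing?"""
    if tests_pass(repo_dir):
        return 0  # nothing to fix; no signal for this episode
    apply_patch(repo_dir, patch)  # hypothetical helper that edits the repo
    return 1 if tests_pass(repo_dir) else 0
```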

And obviously this will become, and is becoming, a larger contributor to the next generation of the system.

When do you think it just flips into everything being machine self-improvement, versus human-directed or needing a lot of human intervention? Do you think that's two years away? Five years away? Ten years away?

Well, I guess, building on what I was saying, I think there's a domain caveat to that. So rolling forward that software engineering self-improvement, I think you're going to have a system that can write complete repositories, identify bugs, refactor code, but it doesn't suddenly understand biology.

Sure.

Right. It's just like there's a domain gap there in knowledge.

But even beyond that, there are sets of strategies in software engineering that differ from scientific or engineering strategies. You're not operating under... it's not decision-making under uncertainty to the same degree. It's very verifiable, and that's driven so much of our work.

Mhm.

So in that domain, I think it's happening now-ish. And I think we'll see the same thing for AI research. That's a slower outer loop, because now the experiment isn't just checking some unit tests passing; it's checking: what was the scaling property, did this model converge, what's the generalization of the system? That requires GPUs; that requires many hours of experiments. But I think that will also come.

And those are all evals that people use today as they're looking at existing models, and so they do have that utility function, that feedback loop, that can be driven by self-learning.

That's right. That's right. But again, the connection of these things to the physical world is going to be so critical, because both of those systems are being trained in a closed loop against that domain. So it's a closed loop for doing software engineering, a closed loop for doing AI research. And that's the premise of Periodic: we need to have these closed loops of actually doing science, of actually doing engineering. And these two domains are how I think the rest of the world will go, with some delay. And this is, again, the foundational technology for that.

Super interesting. Do you think you need sufficiently good robotic systems in order to have that closed loop for what you're doing? In other words, do you need something like Physical Intelligence or Skild or something else to work in order for Periodic to hit that escape velocity in terms of a closed-loop system?

No, but it's a huge accelerator.

Mhm.

The goal for Periodic is to generate high-quantity, high-quality, diverse data, and automation is an assist to that. So right now we employ people as well, and we have autonomous parts that are just, you know, very reliable. If you had a dexterous humanoid who could wander into an unstructured lab, make sense of it, and follow instructions reliably, that would be a huge accelerator. Right now, the automation of physical systems requires very careful design, and it's slow. But I think improvements in robotics are just going to accelerate this. Already, the reliability of these sort of hybrid systems is sufficient to produce huge amounts of reliable data, but it's just going to accelerate us further.

Yeah. One of the reasons I ask is I used to own this company, Color, and we built our own liquid-handling robotic systems, right? We'd buy liquid-handling robots, but then we'd have to adjust them dramatically. We had cameras that would use ML to monitor the system and sort of make adjustments. We had to 3D-print parts to decrease vibrations on the platform, because we were dealing with such small volumes of liquid, right? And so there were enormous amounts of customization, and the firmware for it was awful, and writing against that was painful, versus just having a robotic system that would work like a modern system in all the ways you'd conceive of, right? That's one of the reasons I was asking: if you really want to do high-throughput experiments, you need these underlying systems to be able to do all the liquid handling and the titration and all the rest of it.

Yeah, that's right. I mean, look, right now we're using almost more off-the-shelf robotics. It's very simple, very commoditized. We're not doing a huge amount of innovation on that front. But again, as these more general robotic systems come to be, as they hit this reliability threshold, it's going to be a massive accelerator for spinning up new labs as well.

Yeah, you've seen such a wide range of different things happen in the AI world since your work at Google, I guess at this point about a decade ago. And so you were there during the birth of the transformer model. You were there for the birth of ChatGPT. What are you most excited about outside of Periodic over the next few years, in terms of what's happening with AI?

I mean, of course, robotics.

Mhm.

Again, I'm just so excited about the interface of AI systems with the physical world, and we're approaching one angle of that, which is science, engineering, and we need that data in order to make those advances. But simply agency and control of the physical world via robotics is going to be transformative. So I'm very excited about these interface layers. I think that's going to be such a massive opportunity, because, I mean, how many software engineers are there in the world versus people who build the physical world? And there are just labor shortages everywhere. So yeah, I think it's going to be a very interesting decade.

Oh, amazing. Well, thank you so much for joining us today.

Yeah, thanks so much. It was really great chatting today. Yeah.

Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
