Sohn 2023 | Patrick Collison in Conversation with Sam Altman
By Sohn Conference Foundation
Summary
## Key takeaways

- **US Worst for Crypto Companies**: The US is the worst country in the world in which to have a crypto company, because you just can't offer the product at all; historically a big statement, as it's hard to think of the last technology for which that was the case. [01:03], [01:24]
- **ChatGPT: Summarization King**: Sam's most common ChatGPT use case is summarization, by far; he wouldn't keep up with email and Slack without it, even manually pasting messages into it. [01:45], [02:08]
- **Synthetic Data Solves Bottlenecks**: Once past the synthetic data event horizon, where models are smart enough to make good synthetic data, data bottlenecks won't matter; the naive plan of scaling transformers on internet tokens will run out, but that's not the plan. [03:42], [04:20]
- **Need an IAEA for AI Compute**: A global regulatory agency for extremely powerful AI training systems, like an IAEA for AI, should require systems over a compute threshold to submit to audits and pass safety evals before release. [07:07], [08:05]
- **Mechanistic Interpretability Crucial**: RLHF is not the right long-term solution; we need mechanistic interpretability to understand model internals, find circuits or neurons, and tweak them for robust alignment, but sufficient work is not happening. [12:45], [13:28]
- **AI Friends Outnumber Human Ones**: Kids will have more AI friends than human friends, and we need norms to distinguish AI from humans, as people struggle to differentiate even with early, weak systems like Replika. [16:27], [17:13]
Topics Covered
- US Worst for Crypto Innovation
- ChatGPT Excels at Summarization
- Need Global AI Regulatory Agency
- Mechanistic Interpretability Reduces Doom
- AI Friends Outnumber Human Friends
Full Transcript
Patrick, over to you.
All right. Thank you, Graeme. And thank you, Sam, for being with us. Last year, I actually interviewed Sam Bankman-Fried, which was clearly the wrong Sam to be interviewing. So it's good to correct it this year with the right Sam.
So we'll start out with the topic on everyone's mind: when will we all get our Worldcoin?
I think if you're not in the US, you can get one in a few weeks. If you're in the US, maybe never. I don't know. It depends how truly dedicated the US government is to banning crypto.
So Worldcoin launched around a year ago or so.
It has not actually launched; it's been in beta for maybe a year, but it will go live relatively soon outside of the US. And in the US, you just won't be able to do it. Maybe ever. I don't know.
Which is a crazy thing to think about. Think whatever you want about crypto and its ups and downs, but the fact that the US is the worst country in the world to have a crypto company in, where you just can't offer it at all, is sort of a big statement. Like, historically big statement.
Yes, it is. It's hard to think of the last technology for which that was the case.
Maybe, like, the Europeans are supposed to do this, not us.
Yeah. Supersonic air travel or something.
All right. I presume almost everyone in the audience is a ChatGPT user. What is your most common ChatGPT use case? Not when you're testing something, but where ChatGPT is purely an instrumental tool for you?
Summarization, by far. I wouldn't still keep up with email and Slack without it. Posting a bunch of email or Slack messages into it; hopefully we'll build some better plugins for this over time, but even doing it the manual way works pretty well.
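A minimal sketch of the manual workflow he describes, assuming the 2023-era OpenAI Python client (pre-1.0); the model name, prompt, and pasted text are illustrative, not his actual setup:

```python
# Sketch: paste a batch of Slack/email messages into the chat API and
# ask for a summary. Assumes the pre-1.0 openai Python client.
import openai

openai.api_key = "sk-..."  # placeholder

def summarize(messages_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize these messages as terse bullet points."},
            {"role": "user", "content": messages_text},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(summarize("alice: v2 ships tomorrow\nbob: still blocked on review"))
```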
Have any plugins become part of your workflow yet?
Browsing and the code interpreter once in a while, but honestly, for me personally, they have not yet tipped into a daily habit.
So it seems very plausible that we're on a trend of, I don't know, superlinear realized returns in terms of the capabilities of these models. But who knows? Maybe we'll asymptote soon. Not saying it's likely, but it's at least a possibility. If we end up in the world where we asymptote soon, what do you think we will look back on the reason as having been: too little data, not enough compute? What's the most likely bottleneck?
So, yeah, look, I really don't think it's going to happen, but if it does, I think it'd be that there's something fundamental about our current architectures that limits us in a way that is not obvious today. Like maybe we can never get the systems to be very robust, and thus we can never get them to reliably stay on track, reason, and understand when they're making mistakes, and thus they can't really figure out new knowledge very well at scale. But I don't have any reason to believe that's the case.
And some people have made the case that we're now training on roughly all of the internet's tokens, and you can't grow that another two orders of magnitude. I guess you could counter with synthetic data generation. Do you think data bottlenecks matter at all?
I think you just touched on it. As long as you can get over the synthetic data event horizon, where the model is smart enough to make good synthetic data, I think it should be all right. We will need new techniques for sure; I don't want to pretend otherwise in any way. The naive plan of just scaling up a transformer with pre-training tokens from the internet will run out, but that's not the plan.
Mhm.
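A toy sketch of what being "over the synthetic data event horizon" could look like in practice: the model both authors new training examples and filters them for quality. Both `generate` and `score` are hypothetical stand-ins for calls to a sufficiently strong model:

```python
# Toy self-generated-data loop: author candidate examples with a strong
# model, then use the model again as a judge to keep only good ones.
def generate(prompt: str) -> str:
    return "..."  # stand-in for a call to a strong model (hypothetical)

def score(example: str) -> float:
    return 0.0  # stand-in for a model-as-judge quality score (hypothetical)

def make_synthetic_dataset(topics, n_per_topic=4, threshold=0.8):
    dataset = []
    for topic in topics:
        for _ in range(n_per_topic):
            example = generate(f"Write a hard question and worked answer about {topic}.")
            if score(example) >= threshold:  # keep only high-quality examples
                dataset.append(example)
    return dataset
```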
One of the big breakthroughs in, I guess, GPT-3.5 and 4 is RLHF. If you, Sam, personally sat down and did all of the RLHF, would the model be significantly smarter? Like, does it matter who's giving the feedback?
I think we are getting to the phase where you really do want smart experts giving the feedback in certain areas to get the model to be as generally smart as possible.
So will this create a crazy battle for the smartest grad students?
I think so. I don't know how crazy a battle it'll be, because there are a lot of smart grad students in the world, but smart grad students, I think, will be very important.
And how should one think about the question of how many smart grad students one needs? Is one enough, or do you need 10,000?
We're studying this right now. We really don't know how much leverage you can get out of one really smart person, where the model can help and the model can do some of its own RL. We're deeply interested in this, but it's a very open question.
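For context on why the feedback-giver matters: in the standard published RLHF recipe (not necessarily OpenAI's exact one), the labeler's choice between two responses is the only signal the reward model trains on, via a Bradley-Terry style loss:

```python
# Standard pairwise reward-model loss from the published RLHF
# literature: push the reward of the labeler's chosen response above
# the rejected one. Labeler quality is the entire training signal.
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # r_chosen / r_rejected: scalar rewards for each response in a pair.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

loss = reward_model_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
```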
Should nuclear secrets be classified?
Probably yes. I don't know how effective we've been there. I think the reason we have avoided nuclear disaster is not solely attributable to the fact that we classified the secrets, but that we did a number of smart things and we got lucky. The amount of energy needed, at least for a long time, was huge and sort of required the power of nations, and we made the IAEA, which I think was a good decision on the whole, and a whole bunch of other things too. So, yeah, I think probably anything you can do there to increase the probability of a good outcome is worth doing. Classification of nuclear secrets probably helps; it doesn't seem to make a lot of sense not to classify them. On the other hand, I don't think it'd be a complete solution.
What's the biggest lesson we should take from our experience with nuclear nonproliferation for the considerations that are now central?
First of all, I think it is always a mistake to draw too much inspiration from a previous technology. Everybody wants the analogy. Everybody wants to say, "Oh, it's like this, or it's like that, or we did it like this, so we're going to do it like that again." And the shape of every technology is just different. However, I think nuclear materials and AI supercomputers do have some similarities, and this is a place where we can draw more than the usual parallels and inspiration. But I would caution people not to overlearn the lessons of the last thing. I think something like an IAEA for AI, and I realize how naive this sounds and how difficult it is to do, but getting a global regulatory agency that everybody signs up for, for extremely powerful AI training systems, seems to me like a very important thing to do. So I think that's one lesson we could learn.
And if it's established, if it exists tomorrow, what's the first thing it should do?
The easiest way to implement this would be a compute threshold. The best way to implement this would be a capabilities threshold, but that's harder to measure. Any system over that threshold, I think, should submit to audits, with full visibility to that organization, and be required to pass certain safety evals before releasing systems. That would be the first thing.
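To make the compute-threshold idea concrete, here is a toy check using the common ~6 × parameters × tokens approximation for transformer training FLOPs; the threshold value is invented for illustration, not a number anyone proposed:

```python
# Toy compute-threshold trigger. Uses the common 6*N*D approximation
# for transformer training FLOPs; the threshold is a made-up placeholder.
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

AUDIT_THRESHOLD_FLOPS = 1e25  # hypothetical regulatory line

run = training_flops(n_params=1.75e11, n_tokens=3.0e11)  # roughly GPT-3 scale
print(f"{run:.2e}", "audit required" if run > AUDIT_THRESHOLD_FLOPS else "below threshold")
```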
And some people on, I don't know how one would characterize the side, but let's say the more pugilistic side, would say that all sounds great, but China is not going to do that, and therefore we'll just be handicapping ourselves, and consequently it's a less good idea than it seems on the surface.
There are a lot of people who make incredibly strong statements about what China will or won't do who have never been to China, never spoken to someone who has worked on diplomacy with China in the past, and really know nothing about complex, high-stakes international relations. I think it is obviously super hard, but also I think no one wants to destroy the whole world, and there is reason to at least try here.
There are also, I think, a bunch of unusual things about this, which is why it's dangerous to learn from any technological analogy of the past. There's of course the energy signature and the amount of energy needed, but there also aren't that many people making the most capable GPUs, and you could require them all to put in some sort of monitoring thing that says, if you're talking to more than 10,000 other GPUs, you've got to do whatever. There are options.
Yeah. So one of the big surprises for me this year has been the progress in open source models, and it's been this kind of frenzied pace the last, I don't know, 60 days or something. How good do you think the open source models will be in a year?
I think there are going to be two thrusts to development here. There will be the hyperscalers' best closed source models, and there will be the progress that the open source community makes, which will be a couple of years behind, maybe. But I think we're going to be in a world where there are very capable open source models and people use them for all sorts of things, and the creative power of the whole community is going to impress all of us. And then there will be the frontier of what people with the giant clusters can do, and that will be fairly far ahead, and I think that's good, because we get more time to figure out how to deal with some of the scarier things.
David Luan made the case to me that the set of economically useful activities is clearly a subset of all possible activities, and that pretty good models might be sufficient to address most of that first set. So maybe the super large models will be very scientifically interesting, and maybe you'll need them to do things like generate further AI progress, but for most practical day-to-day cases, maybe an open source model will be sufficient. How likely do you think that future is?
I think for many super economically valuable things, yes, the smaller open source model will be sufficient. But you actually just touched on the one thing I would say, which is: help us invent superintelligence. That's a pretty economically valuable activity. So is curing all cancer, or discovering new physics, or whatever else, and that will happen with the biggest models first.
Should Facebook open source Llama at this point?
Probably.
Should they adopt the strategy of open sourcing their foundation models, or just Llama in particular?
I think Facebook's AI strategy has been confused at best for some time, but I think they're now getting really serious, and they have extremely competent people, and I expect a more cohesive strategy from them soon. I think they'll be a surprising new real player here.
Is there any new discovery that could be made that would meaningfully change your p(doom), either by elevating it or by decreasing it?
Yeah, a lot. I think most of the new work between here and superintelligence will move that probability up or down.
Okay. Is there anything you're particularly paying attention to? Any kind of contingent fact you'd love to know, if we could know it?
First of all, I don't think RLHF is the right long-term solution. I don't think we can rely on that. I think it's helpful; it certainly makes these models easier to use. But what you really want is to understand what's happening in the internals of the models and be able to align that. To say, exactly here is the circuit, or the set of artificial neurons, where something is happening, and tweak that in a way that then gives a robust change to the performance of the model.
And if we can get that, the mechanistic interpretability stuff...
Yeah. Well, that, and there's a whole bunch of things beyond that. But if we can get that direction to reliably work, I think everybody's p(doom) would go down a lot.
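A minimal sketch of the kind of intervention described, "here is the neuron, tweak it": suppress one MLP unit with a PyTorch forward hook. The model, layer, and neuron index are invented for illustration; real interpretability work is in finding units worth tweaking:

```python
# Sketch: ablate one MLP unit in a small transformer via a forward hook.
# Layer and neuron indices are invented; locating them is the hard part.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256),
    num_layers=2,
)
TARGET_LAYER, TARGET_NEURON = 1, 17  # hypothetical "circuit" location

def ablate(module, inputs, output):
    output[..., TARGET_NEURON] = 0.0  # suppress the unit's activation
    return output

# Hook the first linear layer of the chosen block's MLP.
handle = model.layers[TARGET_LAYER].linear1.register_forward_hook(ablate)
with torch.no_grad():
    out = model(torch.randn(5, 3, 64))  # run with the intervention active
handle.remove()
```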
And do you think sufficient interpretability work is happening?
No.
Why not? A lot of people say they're very worried about AI safety, so that seems superficially surprising.
Most of the people who say they're really worried about AI safety just seem to spend their days on Twitter saying they're really worried about AI safety, or any number of other things. There are people who are very worried about AI safety and doing great technical work there, but we need a lot more of them. We're certainly shifting a lot more technical people inside OpenAI to work on that. But what the world needs is not more AI safety people who post on Twitter and write long philosophical diatribes. It needs more people who are going to do the technical work to make these systems safe and reliably aligned. And I think that's happening. It'll be a combination of good ML researchers shifting their focus and new people coming into the field.
A lot of people on this call are active philanthropists, and most of them don't post very much on Twitter. They hear this exchange and think, oh, maybe I should help fund something in the interpretability space. If they're having that thought, what's the next step?
One strategy that I think has not happened enough is grants: grants to single people or small groups of people who are very technical and want to push forward a technical solution, and who are maybe in grad school, or just out, or in undergrad, or whatever. I think that is well worth trying. They need access to fairly powerful models, and OpenAI is trying to figure out programs to support independent alignment researchers. But I think giving those people financial support is a very good step.
To what degree, in addition to being somewhat capital bottlenecked, is the field skill bottlenecked, where there are people who maybe have the intrinsic characteristics required but don't have the, I don't know, four years of learning or something like that, which are also a prerequisite for being effective?
I think if you have a smart person who has learned to do good research and has the right sort of mindset, it only takes about six months to take a smart physics researcher and make them into a productive AI researcher. So we don't have enough talent in the field yet, but it's coming soon. We have a program at OpenAI that does exactly this, and I'm astonished how well it works.
It seems that pretty soon we'll have agents that you can converse with in very natural form: low latency, full duplex, you can interrupt them, the whole thing. And obviously we're already seeing with things like Character.AI and Replika that even nascent products in this direction are getting pretty remarkable traction. It seems to me that these are likely to be a huge deal, and maybe we're substantially underestimating it, again especially once you can converse through voice. So, (a) do you think that's right, and (b) if that's right, what do you think the likely consequences are?
Yeah, I do think it's right, for sure. A thing someone said to me recently that has stuck with me is that they're pretty sure their kids are going to have more AI friends than human friends. And I don't know what the consequences are going to be. One thing that I think is important is that we establish a societal norm soon about whether you're talking to an AI, or a human, or a sort of weird AI-assisted human situation. But people seem to have a hard time differentiating in their heads, even with these very early, weak systems like Replika that you mentioned. Whatever the circuits in our brains are that crave social interaction, they seem satisfiable, for some people in some cases, with an AI friend, and figuring out how to handle that, I think, is tricky.
Someone recently told me that a frequent topic of discussion on the Replika subreddit is how to handle the emotional challenges and trauma of upgrades to the Replika models, because suddenly your friend becomes somewhat lobotomized, or at least a somewhat different person. And presumably these interlocutors all know that Replika is in fact an AI, but somehow, to your point, our emotional response doesn't necessarily seem all that different.
I think what most people assume we're heading to is a society with one sort of supreme-being superintelligence, floating in the sky or whatever. And I think what we're actually heading to, which is less scary but in some sense still as weird, is a society that just has a lot of AIs integrated along with humans. There have been movies about this for a long time: there's C-3PO or whatever you want in Star Wars. People know it's an AI. It's still useful. They still interact with it. It's kind of cute and person-like, although you know it's not a person. And in that world, where we just have a lot of AIs contributing to the societal infrastructure we all build up together, that feels manageable, and less scary to me than the sort of single big superintelligence.
Yeah. Well, this is a financial event. So, there's a kind of debate in economics as to whether changes in the working-age population push real interest rates up or down, because you have a whole bunch of countervailing effects: people are more productive, but you also need capital investment to make them productive, and so forth. How will AI change real interest rates?
I try not to make macro predictions.
I'll say I think they're going to change a lot.
Okay. Well, how will it change measured economic growth?
I think it should lead to a massive increase in real economic growth, and I presume we'll be able to measure that reasonably well.
And will at least the early stages of that be an incredibly capital-intensive period? Because we now know which cancer-curing factories or pharma companies we should build, and what exactly the right reactor designs are, and so forth.
I would take the other side of that. Again, we don't know, but I would say that human capital allocation is so horrible that if we know exactly what to do, even if it's expensive...
You mean present-day capital allocation done by humans?
Yeah.
Or do you mean the allocation of the actual people themselves across society into different roles?
No, I meant capital allocation done by humans. How much do you think we spend on cancer research today?
How much do we spend on cancer research a year? I don't know; it depends if you count the pharma companies, but it's probably about eight or nine billion from the NIH, plus however much the drug companies spend, probably some small multiple of that again. But if it's under 50 billion...
Okay, I was going to guess between 50 and 100 billion per year. And if an AI could tell us exactly what to do, and we spent $500 million a year on one single project, which would be huge for a single project, but it was the right answer...
Yep.
That would be a great efficiency gain.
Yep. Okay. So we will actually become significantly more capital-efficient once we have this technology.
That's my guess.
Yeah. Interesting.
For OpenAI, you obviously want to be, and are, a preeminent research organization. But with respect to commercialization, is it more important to be a consumer company or an infrastructure company?
I am a believer, as a business strategy, in platform plus killer app. I think that's worked for a bunch of businesses over time for good reason. I think the fact that we're doing a consumer product is helping us make our platform much better, and I hope over time that we figure out how to have the platform make the consumer app much better too. So I think it's a good cohesive strategy to do them together. But as you pointed out, what we're really about is being the best research org in the world, and that is more important to us than any productization.
And building the org that can make these repeated breakthroughs. They don't all work; we've gone down some bad paths. But we have figured out more than our fair share of the paradigm shifts, and I think the next big ones will come from here too. That's really what is important to us to build.
Which breakthrough are you most proud of OpenAI having made?
The whole GPT paradigm. I think that was the kind of thing that has been transformative, an important contribution back to the world, and it comes from the multiple kinds of work that OpenAI is good at combining.
Google I/O starts tomorrow, I think. If you were CEO of Google, what would you do?
I think Google's doing a good job. They have had quite a lot of focus and intensity recently and are really trying to figure out how to remake a lot of the company for this new technology. So I've been impressed.
Are these models and their attendant capabilities actually a threat to search, or is that a superficial response that is a bit too hasty?
I suspect search is going to change in some big ways, but they're not a threat to the existence of search. I think it would be a threat to Google if Google did nothing, but Google is clearly not going to do nothing.
How much important ML research comes out of China?
I would love... sorry, go ahead.
I would love to know the answer to that question too. How much of what comes out of China do we get to see? Not very much.
Yes. I mean, from the published literature, nonzero, but not a giant amount.
Do you have any sense as to why? Because the number of published papers is very large, and there are a lot of Chinese researchers in the US who do fantastic work. So why is the per-paper impact from the Chinese work relatively low?
What a lot of people suspect is that they're just not publishing the stuff that is most important.
Do you think that's likely to be true?
I don't trust my intuitions here.
I just feel confused.
Would you prefer OpenAI to figure out a 10x improvement to training efficiency or to inference efficiency?
It's a good question. It sort of depends on how important synthetic data turns out to be. I guess, if forced to choose, I would choose inference efficiency. But I think the right metric is to think about all the compute that will ever be spent on a model, training plus all inference, and try to optimize that.
And you say inference efficiency because that is likely the dominant term in that equation?
Probably. I mean, if we're doing our jobs right.
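As a back-of-envelope illustration of the lifetime-compute metric he describes, and why inference likely dominates for a widely used model: the common approximations are ~6·N·D FLOPs for training and ~2·N FLOPs per generated token for inference. All quantities below are invented:

```python
# Back-of-envelope lifetime compute: training (~6 * params * training
# tokens) vs. inference (~2 * params * tokens ever served). Invented numbers.
def lifetime_flops(n_params, train_tokens, served_tokens):
    training = 6.0 * n_params * train_tokens
    inference = 2.0 * n_params * served_tokens
    return training, inference

train, infer = lifetime_flops(n_params=1e11, train_tokens=1e12, served_tokens=1e14)
print(f"training {train:.1e}, inference {infer:.1e}")  # inference dominates here
```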
Right. When GPT-2 came out, only a very small number of people noticed that it had happened and really understood what it signified, to your point about the importance of the breakthrough. Is there a GPT-2 moment happening now?
There are a lot of things we're working on that I think will be GPT-2-like moments if they come together, but there's nothing released that I could point to yet and say with high confidence, this is the GPT-2 of 2023. But I hope by the end of this year, or by next year, that will change.
What's the best non-OpenAI AI product that you use?
Honestly, I don't use a lot of things. I kind of have a very narrow view of the world, but ChatGPT is the only AI product I use daily.
Is there an AI product that you wish existed, that you think our current capabilities make possible or will very soon make possible, that you're looking forward to?
I would like a Copilot-like product that controls my entire computer, so it can look at my Slack and my email and Zoom and iMessages and my massive to-do-list documents and just do most of my work. Some kind of Siri-plus sort of thing.
Yeah.
Yeah. And you mentioned curing cancer. Is there an obvious application of these techniques and technologies to science, again one we have or are close to having the capabilities for, that you don't see people obviously pursuing today?
There's a boring one and an exciting one. The boring answer is that if you can just make really good tools, like the one I just mentioned, and accelerate individual scientists each by a factor of three or five or ten or whatever, that probably increases the rate of scientific discovery by a lot, even though it's not directly doing science itself.
The more exciting one is that I do think a similar system could go off and start to read all of the literature, think of new ideas, do some limited tests in simulation, email a scientist and say, "Hey, can you run this for me in the wet lab?", and probably make real progress.
And, I don't know exactly how the ontology works here, but you can imagine building these better general-purpose models that are kind of like a human: they'll go read a lot of literature, etc., maybe smarter than a human, better memory, who knows what. And then you can imagine models trained on certain data sets that are doing something nothing like what a human does; you're mapping, I don't know, CRISPR edits to editing accuracies or something like that, a really special-purpose model in some particular domain. Do you think the scientifically useful application of these models will come more from the first category, where we're kind of creating better humans, or from the second category, where we're creating these predictive architectures for problem domains that are not currently easy to work with?
I really don't know. In most areas I'm willing to give some rough opinion.
In this one, I don't feel like I have a deep enough understanding of the process of science and how great scientists actually work. I guess I would say that if we can figure out someday how to build models that are really great at reasoning, then I think they should be able to make some scientific leaps by themselves. But that requires more work.
OpenAI has done a super impressive job of fundraising and has a very unusual capital structure, with the nonprofit and the Microsoft deal and all the things. Are weird capital structures underrated? Should organizations and companies and founders be thinking more expansively, rather than defaulting to, all right, we're just a Delaware corp? OpenAI, as you pointed out, broke all the rules. Should people be breaking more corporate structure rules?
I suspect not. I suspect it's a horrible thing to innovate on. You should innovate on products and science, not corporate structures. The shape of our problem is just so weird that, despite our best efforts, we had to do something strange. But it's been an unpleasant experience and a time suck on the whole. The other efforts I'm involved in have all had normal capital structures, and I think that's better.
A lot of companies you're involved with are very capital-intensive, and maybe OpenAI is the most capital-intensive, although who knows, maybe Helion will overtake it or something. Do we underestimate the extent to which capital is a bottleneck, or the bottleneck, on realized innovation? Is that some kind of common theme running through the various efforts you're involved with?
Yes. There are basically four companies that I would say I'm involved with, other than just having written a check as an investor...
Do you want to enumerate those for the sake of the audience?
OpenAI and Helion are the things I spend the most time on, and then also Retro and Worldcoin. All of them raised at minimum nine digits before any product at all, in OpenAI's case much more than that, either as a first round or before releasing a product, and they all take a long time, many years, to get to the release of a product. I think there's a lot of value in being willing to do stuff like this, and it fell out of favor in Silicon Valley at some point.
And I understand why. It's also great that there are companies that only ever have to raise a few hundred thousand or a million dollars and get to profitability. But I think we overpivoted in that direction, and we have collectively forgotten how to do the high-risk, high-reward, hugely capital- and time-intensive bets, and those are so valuable. We should be able to support both.
This touches on the question of why there aren't more Elons. The two most successful hardware companies, in the broadest sense, started in the last 20 years were both started by the same person. That seems like a pretty surprising fact, and obviously Elon is singular in many respects. But what's your answer to that question? Do we just lack people with his particular set of circumstances? Is it actually a capital story, along the lines of what you're saying? If it was your job to cause there to be more SpaceXes and Teslas in the world, and maybe you're trying to do some of that yourself, but if you had to push in that direction systematically, what would you be trying to change?
I have never met another Elon. I have never met another person that I think can be developed easily into another Elon. He is this sort of strange n-of-one character. I'm happy he exists in the world, of course, but also a complex person. I don't know how you get more people like that. I don't know. I don't know what you think about how to make more. I'm curious.
I don't know. I suspect there's something in the culture on both the founder and the capital side: the kinds of companies founders want to create, and then the disposition, and, to some extent though maybe a lesser extent, the fund structure of the sources of capital. A surprise for me, as I've learned more about the space over the last 15 years, is the extent to which there's a finite, or essentially finite, set of funding models in the world, each with a particular set of incentives and, for the most part, a particular sociology, and that's evolved over time. Venture capital was itself an invention. PE in its modern form was essentially an invention. And so I doubt we're done with that process of funding model invention, and I suspect there are models at least somewhat different from those that prevail today that are somewhat more amenable to this kind of innovation.
Okay. So one thing I'm excited about, and you're a great example of this, is that most of the people who became tech billionaires in the last cycle are pretty interested in putting serious capital into long-term projects, and the availability of significant blocks of capital upfront for high-risk, high-reward, long-duration projects that rely on fundamental science and innovation is going to change dramatically, or already has. So I think there's going to be a lot more capital available for this. You still need the Elon-like people to do it.
And one project I've always been tempted to do is to say, okay, we're going to identify the, let's say, 100 most talented people we can find who want to work on these sorts of projects. We're going to give them, like, $250K a year for 10 years or something. So it's like giving a 20-year-old tenure, or something that feels like tenure. Let them go off and, without the kind of pressure that most people feel, have the certainty to explore for a long period of time, without the very understandable pressure to make a ton of money first. Put them together with great mentors and a great peer group. And then the financial model would be: if you start a company, the vehicle gets to invest on predefined terms; if not, that's fine, you'll be a writer, politician, thinker, whatever.
Yeah.
I think that would pay off. And someone should do it.
That's kind of the university model, I guess. And I don't mean that as a "this already exists, you're just reinventing the bus" comment; I mean it as maybe suggestive evidence that it can work. Universities are usually not that good at supporting their spinouts, but it happens to at least some extent. And one of the theses for Arc, in fact, is that by formalizing this somewhat more than it is, by encouraging it somewhat more than it tends to be, that actually might be a pretty effective model.
So Silvana, my co-founder at Arc: I've known her since we were teenagers, more than half our lives, and Patrick Hsu, the other co-founder, she did her PhD alongside, so she'd known him for a long time. To your point about long-term investment, part of how I was comfortable with it is that I'd known these people for a really extended period. As you think about something like Retro, or some of these other companies, how do you decide whether someone is the kind of person you can undertake this super long-term expedition with?
Actually, I had known Joe for a long time, so maybe it's a bad example...
Well, I guess that is the question: do you in fact need to have known the person for a long time?
It's super important. It doesn't always work, but I try to work with people I've known for a decade-plus at this point. You don't want to only do that; you want some new energy and volatility in the mix. But having a significant proportion of people you've known for a long time and worked with for a long time, I think that's really valuable. In the case of OpenAI, I had known Greg Brockman for a long time. I met Ilya maybe only a year, even a little less, before we started the company, but spent a lot of time with him. And that was a really good combination. I derive great pleasure from having working relationships with people over decades, through multiple projects. It's a lot of fun to feel like you're building together towards something that has a very long arc.
Agreed.
Which company that is not thought of as an AI company will benefit the most from AI over the next five years?
I think some sort of investing vehicle is going to figure out how to use AI to be an unbelievable investor and just have crazy outperformance, like Renaissance with these new technologies.
Is there an operating company that you look at?
Well, do you think of Microsoft as an AI company?
Let's say no for the purpose of this question.
Okay. I think Microsoft will transform themselves across almost every axis with AI.
And is that because they're just taking it more seriously, or because there's something about the nature of Microsoft that makes them particularly suited to this?
They understood it sooner than others and have been taking it more seriously than others.
What do you think the likelihood is that we will come to realize that GPT-4 is somehow significantly overfit on the problems in the domains it was trained on? How would we know if it was, or do you even think about overfitting as a kind of concern?
I think about the Codeforces problems, before 2021 versus after 2021, where it does better on the earlier ones, etc. I think the base model is not significantly overfit, but we don't understand the RLHF process as well, and we may be doing more, like, brain damage to the model in that than we even realize.
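A sketch of the before/after-cutoff check he alludes to: split benchmark problems by publication date relative to the training cutoff and compare solve rates; a large gap suggests memorization rather than genuine skill. The cutoff date and the `model_solves` evaluator are hypothetical:

```python
# Contamination check: compare solve rates on problems published before
# vs. after a (hypothetical) training-data cutoff.
from datetime import date

CUTOFF = date(2021, 9, 1)  # hypothetical training-data cutoff

def model_solves(problem: dict) -> bool:
    return False  # stub: run the model and judge its answer here

def solve_rate(problems):
    return sum(model_solves(p) for p in problems) / len(problems)

def contamination_gap(problems):
    old = [p for p in problems if p["published"] < CUTOFF]
    new = [p for p in problems if p["published"] >= CUTOFF]
    # A big drop on post-cutoff problems suggests memorization, not skill.
    return solve_rate(old) - solve_rate(new)
```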
Do you think that g, the generalized measure of intelligence, exists in humans as anything other than a statistical artifact? And if the answer is yes, do you think there exists an analogous common factor in models?
I think it's a very imprecise notion, but there's clearly something real that it's getting at in humans, and for models as well. So I think there are way too many significant figures when people try to talk about it. But it's definitely my experience that very smart people can learn, I won't say arbitrary things, but a lot of things very quickly. There are also some people who are just much better at one kind of thing than another. I don't want to debate the details too much here, but I'll say, as a general thing, I believe that model intelligence will also be somewhat fungible.
And based on your experience thinking through all this AI safety stuff, how, if at all, do you think synthetic biology should be regulated?
I mean, I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn't a great experience. It wasn't that bad compared to what it could have been, but I'm surprised there has not been more global coordination after that, and I think we should have more.
So what do we actually do? Because I think some of the same challenges apply as in AI. The production apparatus for synthetic pathogens is not necessarily that large, and the observability and telemetry are difficult.
No, I think this one's a lot harder than the AI challenge, where we do have some of these characteristics, like tremendous amounts of energy and lots of GPUs. I haven't thought about this as much. I would ask you what we should do. If someone told me, "This is a problem, what should we do?", I would call you. So, what should we do?
I don't know that I have the prescription. The easy thing to say, and I'm not sure how much it helps, is that we need a lot more general observability: wastewater sequencing, things like that. We should do that regardless, even if it doesn't help us against synthetic biology attacks. And the fact that we don't have a giant correlational data set of the pathogens people are infected with and then longitudinal health outcomes is just a crazy fact in general. Then there's a somewhat enumerable set of classes of infectious diseases that people tend to be most susceptible to, COVID itself being an example, and I think we could make a lot more progress in pan-variant treatments and vaccines than we do and than we have. So if it is true that COVID was engineered, then for instances of that set, slight modifications to already existing infectious diseases, we can probably significantly improve our protections too. Obviously the concerning category would be completely novel pathogens, and that's presumably a sort of infinite search space. Then I think you get into... there's again a finite set of ways to enter cells, receptors and so forth, so maybe you can use that to tile the space of possible treatments. And we do need to invest in a lot more surplus manufacturing capacity than we have for novel vaccines, and hopefully mRNA platforms and similar make it easier to have general-purpose manufacturing capabilities there. But as you can tell from this kind of long answer, I don't think there's a silver bullet, and I think it's plausible that even if you did everything I just said, you still would not have enough. Getting way better at rapid response, treatment, vaccination, whatever, all seems like an obvious thing to do that I would have, again, hoped for more progress on by now.
Yeah, I very much agree with that.
And clinical trials. That was the limiting step in COVID. It's at this point been widely reported and remarked upon that we had the vaccine candidate in January, and some of what happened after that was manufacturing scale-up, but much of it was just how long it took us to tell that it actually works and is sufficiently safe. That seems among the lowest-hanging fruit in the entire biomedical ecosystem to me.
100%.
I guess your investment in TrialSpark is consistent with that observation. So, Ezra Klein and Derek Thompson are writing a book about the idea of an abundance agenda: that so much of the left, of the liberal sensibility, is about forbearance, a kind of quasi-neopuritanism, and they believe, and have been making the case in some of their respective public writings so far, but for the purpose of this book, the argument that for society to actually realize many of the values we care about, equal and prosperous and environmentally friendly and so forth, we'll need just a lot more stuff, in many different domains; more of the Henry Adams curve realized. And they frequently observe that permitting, in the broadest sense, all sorts of well-intentioned but self-imposed restrictions, is the rate-limiting factor in making this happen, maybe most obviously with the energy transition. Across all the different things you're involved with, to what degree do you think this dynamic of self-imposed restrictions and strictures is the relevant variable in the progress that actually ensues?
It definitely seems huge, but there are a lot of people who like to say this is the only problem, and if we could just resolve permitting at large, we'd all be happy, and I don't think it's quite that simple either. I totally agree that we need much more abundance; my personal belief is that abundant energy and abundant intelligence are going to be two super important factors there, but there are many others. Certainly, as we start to get closer to being able to deliver a lot of fusion to the world, understanding just how painful the process of getting these things out is, is disheartening, to say the least. It's pushing us to look at all sorts of very strange things we can do sooner, rather than wait for all of the permitting processes that will need to happen to connect these to the grid. It's much easier to go desalinate water in some country that just has its own nuclear authority or whatever, right? I think it is a real problem, and I think we don't have that much societal will to fix it, which makes it even worse. But I don't think it's the only problem.
If Ezra and Derek interviewed you, and I guess they should for this book, and asked you for your number one diagnosis of what is limiting the abundance agenda, what would you nominate?
Societal collective belief that we can actually make the future better, and the level of effort we put into it. Every additional gate you put on something, when these things are fragile anyway, makes them tremendously less likely to happen. It's really hard to start a new company; it's really hard to convince people it's a good thing to do; right now in particular there's a lot of skepticism of that. Then you have this regulatory thing, and it's going to take a long time, so maybe don't even try, and then it's going to be way more expensive. There's just too much friction and doubt at every stage of the process from idea to mass deployment in the world, and I think it makes people try less than they used to, or believe less.
When we first met, whatever it was, 15 or so years ago, Mark Zuckerberg was preeminent in the technology industry and in his 20s. Not that long before then, Marc Andreessen was preeminent in the industry and in his 20s, and not that long before then, Bill Gates and Steve Jobs and so forth. Generally speaking, for most of the history of the software sector, one of the top three people has been in their 20s, and it doesn't seem to me that that's true today. I mean, there are some great people in their 20s, but I'm not sure.
It's not good. Something has really gone wrong, and there's a lot of discussion about what it is. But where are the great founders in their 20s? It's not so obvious. There are definitely some, and I hope we'll see a bunch; I hope this was just a weird accident of history. But maybe something's really gone wrong in our educational system, or our society, or just how we think about companies and what people aspire to. I think it is worth significant concern and study.
On that note, I think we're at time. Thank you so much for doing this interview.
Thank you very much.
And thank you to the folks at Sohn and to Graeme for hosting.