Blaise Agüera y Arcas | What is Intelligence? | Long Now Talks
By The Long Now Foundation
Summary
Topics Covered
- AGI Arrived via Scaling
- Life Requires Universal Computation
- Symbiogenesis Drives Complexity
- Consciousness Enables Multi-Agent Symbiosis
- AI-Human Symbiosis Accelerates Evolution
Full Transcript
[Music] Hello and welcome. I'm Benjamin Bratton. It's really lovely to see you. First of all, thanks from Antikythera to the Long Now Foundation for being such a wonderful partner in these events and in all the work that we've been doing and hope to do together.

Blaise Agüera y Arcas is a VP and Fellow at Google, where he is the CTO of Technology and Society and founder of Paradigms of Intelligence. PI is an organization working on basic research in AI and related fields, especially the foundations of neural computing, active inference, sociality, evolution, and artificial life. In 2008, Blaise was awarded MIT's TR35 prize. During his tenure at Google, he has innovated on-device machine learning for Android and Pixel, invented federated learning, an approach to decentralized model training that avoids sharing private data, and founded the Artists and Machine Intelligence program. And so with that, it is my sincere pleasure to introduce to you my friend, Blaise Agüera. Thank you.
All right.
Thank you all so much for being here.
It's such an honor to be here. Thank you, Benjamin, for the super sweet introduction, and thank you, Patrick and Long Now, for having me.

I've been at Google for a long time now, a little over ten years, I think eleven. And for most of that time, I ran a group called Cerebra. It was a part of Google Research that began very small and grew to several hundred people. It was mostly applied AI. We did some theoretical work, but mostly we did a lot of engineering for AI features that ended up in Android and Pixel phones: things like Now Playing, the song recognizer; face recognition for unlocking the phone; and all kinds of other stuff. We also did some of the models for the Google keyboard, which predicts next words as you type.
My assumption when we were working on all of these things was that we weren't really doing AI. That is to say, these were artificial narrow intelligence: ANI. The reason that term was coined is that when "AI" was originally coined back in the 1950s, it meant what we all thought as kids: that we'd have robots you could have an interesting conversation with. That didn't happen in the 20th century. There were lots of AI winters, defunding after various failures of program-based approaches to AI. And so hope was kind of dwindling. But neural nets were starting to work for very limited forms of perception: things like face recognition and next-word prediction. And so we called those things AI, but in order to distinguish them from the robots you could have an interesting conversation with, we used the term "narrow."

The G in artificial general intelligence meant everything else, the real thing. The so-called core AGI hypothesis, as the computer scientist Ben Goertzel wrote, is that synthetic intelligences with sufficiently broad, that is, human-level, scope are qualitatively different from synthetic intelligences with narrower scope. In other words, AGI is not ANI. This was what I believed too: that probably some insight would come from neuroscience. That's where everything that had worked up to that point had come from; all of the key advances in ANI came from neuroscience. And I thought we'd figure out the trick: we'd figure out, from studying brains, which are the only truly intelligent things we know about, what the secret to real intelligence was. I was hopeful that we were at least on the right track, because we were using neural nets, which were brain-inspired, and they were doing some things that earlier program-based techniques had not succeeded in doing.
I was wrong. The first real inkling I had that I was wrong was when I started to see these kinds of outputs from models like Meena, which was a scaled-up next-word predictor, not unlike the one we'd written for the Android keyboard, just a lot bigger. And this is Meena trying to define philosophy in a conversation with a person. All it's doing is filling in the next word statistically based on the previous words. It's just a much bigger model, trained with a lot more data: an unprecedented amount of data and an unprecedented size of model at the time. You could actually start to compare the outputs of models like Meena to human outputs in terms of how sensible they were, how relevant to the dialogue, the conversation.

And this was really a shock. It started to look like maybe the key to artificial general intelligence was really just scale. I had been quite snobbish about this idea that I'd heard in Silicon Valley, that everything was about just scale and making stuff bigger. That just seemed incredibly naive. My training was in neuroscience and physics, and so the idea that, just because we could make bigger computers and worship at the altar of Moore's law, we were going to solve all of the problems in science and technology seemed ridiculous. But the nerds were right. We scaled that up further with a model called LaMDA in 2021. I wish we had launched it before OpenAI launched their model, but that's the innovator's dilemma. It did even better. This was a model that could have general, open-domain conversations about pretty much anything. It sometimes sucked, sometimes went off the rails and gave you nonsense answers. But then, well, people do too sometimes.
And this has been the story since. Models had been getting bigger exponentially, by a factor of about one and a quarter per year, since 1950. But around this period, the period of the unsupervised learning revolution, that slope ramped upward dramatically, to 3.72 times per year, and that's where it has remained since: an absolutely explosive growth in model sizes, now that we knew that making these predictive models bigger made them better.

It seems to me that this core AGI hypothesis is wrong. I know that this is a very controversial thing to say. We're still having all kinds of conversations about when AGI will arrive. I think that there are several reasons we're asking that question. But what I would ask you, as a thought experiment, is this: if you took any of today's frontier models and just transported them back in time to roughly the year 2000, when the term AGI was coined to distinguish it from artificial narrow intelligence, what do you think the people who coined AGI would have said? Would they have said, "Yeah, you've arrived. This is it"? Of course they would have. So why haven't we admitted that? Why haven't we acknowledged it? Well, it's because we all thought that there would be a trick, and there wasn't a trick. And because there was no discontinuity: it's an exponential, it's fast, but it's also continuous. There's no moment when it was clearly not intelligent before and clearly was after. And we also still, at some level, don't know why scaling it up worked. There may be some other reasons as well that have to do with our insecurities. But I think that these frog-boiling reasons are part of why. That raises the question, if we're going to be brave: could we be massively computationally scaled next-word predictors as well? And I'd like to provoke you, over the next 41 minutes and 22 seconds, with the possible answer: yes. So
why would a brain evolve to be computational? One of the big insights that we've had on the team is that it's really not just the brain that evolved to be computational, but life itself that is computational. That's an idea that takes some getting used to, because we think of life as being the exact opposite of computers. It's squishy. It's wet. It's unreliable. It doesn't run anything like a program. So what on earth do I mean when I say life is computational?

Well, the old idea about life, from the 19th century, was that there was some kind of vital force or spirit that animated life and made it different from matter like a rock. That went out of fashion in the 19th century, of course, and in came materialism: strong materialism, which says no, the rules of physics are the same for the atoms in living bodies and in rocks, it's just physics all the way down, and therefore there is no difference between living and non-living matter. Well, that's not very satisfying either, because there sure seems to be a difference between living and non-living matter. So what could it be, if it's not physics and not some vital spirit either? There is an answer to that question, I think, and that answer is function.

What do I mean by function? Well, one way of telling whether something has a function is to ask whether it can be broken. In other words, if I split a rock in half, it's not like I have a rock that's broken; I just have two rocks now. Whereas if I destroy a kidney, if I break it in half, then I have a non-working kidney. A kidney has a function, and a rock, at least a rock on a sterile world, doesn't. What I mean by that is this: if I came back with this object from the future, if I were some time traveler and you asked me, "What is that thing?" and I told you it's an artificial kidney, that it has an operating lifetime of 100 years, that you can implant it and it filters urea just like your kidney does, that means something, right? It means, for one, that if you have kidney failure, you're going to live. So it's a very real statement. It's not mystical. But at the same time, there is something interestingly spooky about function, because it's not something that matter tells you in isolation. It could be made out of carbon nanotubes; it could be made out of tungsten filaments; who knows what. The point is what it does in the context of the rest of your body, what its relationships would be with the rest of the body in the normal functioning order of things. So it's a relationship, a set of relationships. It's kind of ecological, if you think about it. And it's something that is beyond the physical matter, and yet also very fundamentally constrained by the physics of our world. It's not like it has some kidney spirit, right? It just works. Works means functions.
This idea of function as something fundamental was really pioneered by Alan Turing and John von Neumann, the founders of computer science. They were mathematicians; they thought about functions all the time. And you might recognize this device. It's an actual instance of a Turing machine. Alan Turing invented the Turing machine. It was a purely conceptual invention; it was never intended to be built, but Mike Davey built one in 2010. A Turing machine is a device that has a head that moves left and right on a tape and reads, writes, and erases symbols on that tape according to a table of rules. That's all a Turing machine is. But what Turing showed is that any computation you could do, any calculation you could do with pen and paper, can be done by a Turing machine with the right table of rules.

And then came the genius part, in 1936. He also figured out that there were certain tables of rules such that if you wrote down another table of rules as symbols on the tape, then this table of rules would interpret the table on the tape and compute the same thing that that machine would have computed. And that's what makes a universal computer. In other words, there are certain machines that can run programs, and those programs can do any computation, not just one particular computation fixed by whatever table you've got. And that's a really interesting discovery, because now not only has he said there is a way of specifying a function, but also that any machine running a given program is calculating the same thing: the function the program performs. If it's adding two numbers together, there are many programs that could add two numbers together. There are many languages, many ways of specifying the table for that. They're all equivalent in terms of the function they compute.
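That head-tape-rules picture fits in a few lines of code. The sketch below is illustrative only, not anything from the talk: the rule table implements unary increment (walk right over the 1s, write one more 1, halt), and the names `run_turing_machine` and `increment_rules` are my own.

```python
# A minimal Turing machine: a head moving on a tape, driven by a table of rules.
# rules: {(state, symbol): (new_state, write_symbol, move)} with move in {-1, +1}.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    # Read back the used portion of the tape, left to right.
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Unary increment: skip over the existing 1s, write one more on the first blank.
increment_rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt",  "1", +1),
}
```

Running it on the tape `"111"` yields `"1111"`: the same function could be computed by many different rule tables, which is exactly the functional-equivalence point above.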
They're functionally equivalent. This is cool. But von Neumann did something further: he introduced the idea of embodied computation, with something called cellular automata. The idea here is that rather than having a tape and a head made out of something fundamentally different from the information written on the tape, he said, let's imagine a world in which the tape and the head are actually part of what is written. In other words, these worlds have a kind of physics. They're generally rendered as grids. If any of you are familiar with Conway's Game of Life, that would be an example of a cellular automaton: very simple rules, a physics, for how each grid cell changes based on the values of the grid cells around it, and you can write programs essentially by configuring the states of those cells. The reason von Neumann was thinking about these very simple two-dimensional physics is that he was thinking about the problem of life.
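Conway's Game of Life, mentioned above, makes the cellular-automaton idea concrete. A minimal sketch, assuming the standard rules (a dead cell with exactly three live neighbors is born; a live cell with two or three live neighbors survives) on an unbounded grid of (row, col) coordinates:

```python
# Conway's Game of Life: each cell's next state depends only on its eight
# neighbors. Live cells are stored as a set of (row, col) coordinates.

from itertools import product

def step(live):
    """One generation: birth on exactly 3 neighbors, survival on 2 or 3."""
    neighbor_counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                cell = (r + dr, c + dc)
                neighbor_counts[cell] = neighbor_counts.get(cell, 0) + 1
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates with period 2: a row of three flips to a column.
blinker = {(1, 0), (1, 1), (1, 2)}
```

The physics is the single `step` rule; everything else, patterns, gliders, even computers, is configuration of the grid, which is the sense in which the tape and the head become part of what is written.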
And in particular, he said: suppose that you are a robot made out of Legos, and you're paddling around on a pond that's full of loose Legos, and you want to assemble another robot like yourself out of those loose Legos. How is that possible? Because that's of course what life does. That's what every mother has to do. It's what has to happen in the seed of every plant. It's what every bacterium has to do in order to divide. And it seems a little bit paradoxical that you could make something just as complex as you yourself are out of parts.

What von Neumann realized is that in order for that to work, you had to have inside yourself a tape with instructions for how to build yourself. And you had to have what he called a universal constructor, a machine that would walk along the tape and execute the instructions on it in order to make whatever is written there. And you had to have a tape copier, a second machine. And the instructions for building the universal constructor and the tape copier had to be on the tape. If all of those things were true, then you would have something that could reproduce. He made all of those predictions in 1950, before we had discovered the structure and function of DNA, which is indeed exactly that tape; before we had found the ribosome, which is the universal constructor; and before we had discovered DNA polymerase, which is that copier. On all of those things he was exactly right. But the really cool thing is that he also showed that the universal constructor is a universal Turing machine. They are one and the same. It's just a universal Turing machine where the things it computes with are the actual matter it is made out of. So it's an embodied computation. And with that, von Neumann proved that in order to have life, you have to have universal computation. You can't reproduce without computation: no computation, no life. This is a really profound insight, and one that I think most biologists and most computer scientists are still unaware of.

We began doing some experiments a couple of years ago, some of which we published, that attempted to see how life in that very minimal von Neumann sense could emerge out of non-life. And these are some of those results. We had to use a very minimal Turing-complete language in order to implement this. I did some of the first experiments, and I picked a language called Brainfuck. I didn't pick it just because I love getting in front of a lot of people and saying "Brainfuck," though I admit I'm a 12-year-old on the inside.
This is a Brainfuck program. And you can see that it's very, very hard to understand, but it's very closely modeled on a Turing machine. It has only eight instructions, and those instructions include: move the head one step to the left, move the head one step to the right, increment the byte at the head, decrement the byte at the head. And we're already halfway through them; that's four of the eight. So it's a very, very simple language, but you could write Microsoft Windows in it, if you were non-human.

Here's the experiment, and this experiment is called BFF, for reasons that I will leave as an exercise to the listener. We begin with a bunch of tapes filled with random bytes. The tapes are 64 bytes long, and they're just filled with junk, filled with noise. Remember, there are only eight instructions, and a byte can have one of 256 different values, so only about one in 32 of those bytes is even an instruction at all. The rest of them are no-ops, meaning nothing happens when one gets executed; the head just moves on. That's how it begins: random tapes. We pluck two of these tapes out of the soup at random. Here I'm using 8,192 of them; in many of the experiments you can use only a thousand, and all of this still works. So: a thousand tapes of length 64. You pluck two of them out of the soup at random, you stick them end to end, you run the result, and then you pull them back apart, put them back in the soup, and repeat. That's it. You just do that.

In the beginning, nothing much happens. I'm printing here the first couple of dozen tapes and only showing you the instructions; the rest of the bytes are no-ops, the 31 out of every 32 byte values that don't code for anything. The average number of instructions that runs when you put two of these tapes together is two, because there just aren't very many instructions there. And I can't see any loops here. Okay, so what happens when you let this thing go? I'll show you. This was actually the first time I got it to work on my laptop, and it was pretty exciting, because laptops run really fast nowadays, and you go from noise to something really magical, which is that suddenly programs emerge. And these programs are complicated. In order to understand what they're doing, you have to really pick them apart and reverse engineer them. They've got all these loops in them. What on earth are the programs doing?
Well, you can tell right away that they have to be reproducing, because you can see that some of these tapes are duplicated many times. There are 5,000 instances of the one on top, and 297 of the next one, and 99 of the next one, and so on. So they are copying themselves, or each other. And there's a lot of computation happening: there are now 4,784 operations happening per interaction, on average. So you've gone from something non-computational and full of noise and junk to something complex, computational, and functional. Functional meaning it can break, like a kidney, right? If I change one of these instructions, it will cease to work. And what happens if it ceases to work? Well, it won't copy itself anymore, and so that tape will get overwritten by something that will copy itself.

And that kind of tells you why life evolves. Life evolves because, in a universe capable of computation, if you figure out somehow how to copy yourself, then you will exist in the future. This is that old joke about DNA being the most stable molecule in the universe, even though of course it's very fragile: if it reproduces itself, it's still going to be around in the future. Whereas if you are not able to do anything, to function, even if you're very robust like a chunk of granite, the best that can happen is that it will take a long time for you to fall apart. So that's why life persists.

We go from here to here, and it doesn't even take that long. This is after 5 million interactions of 8,192 tapes; this is what that looks like. I'm drawing a dot (I reversed the colors, so these are white dots on a black background) for every one of the first 10 million interactions of this soup. Time is on the x-axis, and the y-axis is the number of operations that ran. And you can see that right about at 6 million interactions, something really changes about the soup. It looks like a wall of white. That's what's on the front cover of the book.
That is a phase change, a phase transition. If you think about this like a physicist, what's on the left is like a gas, meaning that all of the bytes are decorrelated from all of the other bytes; they're all independent. And the way you can see that they're decorrelated is to try running the soup through zip: you compress it, and it's incompressible, because a bunch of random bytes doesn't compress at all. Whereas right after that transition, you can compress the hell out of it. It compresses down to about 5% of its original size. It's obvious that it'll compress if there's a bunch of copying going on, because anytime things are copied, you don't have to write all the bytes out; you can just refer back to one of the originals.
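That compressibility test is easy to reproduce. In this sketch, zlib stands in for the "zip" mentioned in the talk, and the copied-tape soup is simulated directly rather than evolved: independent random bytes barely compress, while a soup dominated by copies collapses to a few percent of its size.

```python
# Compressibility as a crude "life detector": random bytes (gas phase)
# don't compress, a soup full of copied tapes (life phase) compresses a lot.

import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size (smaller = more structure)."""
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(0)

# "Gas phase": 64 KiB of independent random bytes.
gas = bytes(rng.randrange(256) for _ in range(64 * 1024))

# "Life phase" stand-in: one 64-byte tape copied 1,024 times.
tape = bytes(rng.randrange(256) for _ in range(64))
life = tape * 1024
```

The gas soup comes out essentially incompressible (ratio near 1), while the copy-filled soup compresses far below the talk's 5% figure, since here every tape is an exact copy.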
So it's a phase change. If the phase on the left is gas, what is the phase on the right? It's life. You could call it machine phase, or you could just call it life. Life is a very special phase of matter, because unlike a solid or a gas or a liquid, it has structure at every scale. It's got complexity that looks different when you zoom in, or when you zoom out, or when you look at a different place. So the tentative conclusion is that pretty much any universe that has a source of randomness and can support computation will evolve life.
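The pluck-two-tapes interaction loop described above can be sketched directly. This is a toy: the instruction set below is a single-head, Brainfuck-style subset I chose for brevity, not the exact dialect used in BFF, and a thousand interactions is nowhere near the millions the talk's runs take, so no replicators will emerge here. What it does show is the mechanics: the tape is both program and data, and most byte values are no-ops.

```python
# A BFF-style soup, sketched: pull two 64-byte tapes, concatenate, execute the
# result as self-modifying code, split it back apart, return both halves.

import random

def execute(tape: bytearray, max_steps: int = 2_000) -> None:
    """Run the tape against itself: code and data share the same bytes."""
    ip, head, steps = 0, 0, 0
    while ip < len(tape) and steps < max_steps:
        op = tape[ip]
        steps += 1
        if op == ord("<"):
            head = (head - 1) % len(tape)
        elif op == ord(">"):
            head = (head + 1) % len(tape)
        elif op == ord("+"):
            tape[head] = (tape[head] + 1) % 256
        elif op == ord("-"):
            tape[head] = (tape[head] - 1) % 256
        elif op == ord("[") and tape[head] == 0:
            depth = 1                      # skip forward past the matching ]
            while depth and ip + 1 < len(tape) and steps < max_steps:
                ip += 1
                steps += 1
                depth += (tape[ip] == ord("[")) - (tape[ip] == ord("]"))
        elif op == ord("]") and tape[head] != 0:
            depth = 1                      # jump back to the matching [
            while depth and ip > 0 and steps < max_steps:
                ip -= 1
                steps += 1
                depth += (tape[ip] == ord("]")) - (tape[ip] == ord("["))
        ip += 1                            # every other byte value is a no-op

def interact(soup, rng):
    """One interaction: join two random tapes, run, split, put them back."""
    i, j = rng.sample(range(len(soup)), 2)
    joined = bytearray(soup[i]) + bytearray(soup[j])
    execute(joined)
    soup[i], soup[j] = joined[:64], joined[64:]

rng = random.Random(0)
soup = [bytearray(rng.randrange(256) for _ in range(64)) for _ in range(1024)]
for _ in range(1_000):
    interact(soup, rng)
```

Note there is no fitness function and no explicit copying anywhere in the loop; any copying that shows up in a long run has to be invented by the tapes themselves, which is the whole point of the experiment.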
But there was really a puzzle in these results, which is: how on earth does it happen so fast? How can we get these really complicated programs in only a few million steps, with only a thousand tapes of length 64? It just seems implausible. And it seems especially implausible because in the original experiments I used mutation: I imagined that there were cosmic rays randomly changing a byte here and there every now and then. But this actually still works even if you crank the mutation down to zero. With zero mutation, you still get life. It takes a little bit longer, but not much. And actually, the life that you see keeps on getting more complex. If you were looking closely at the running program, you might have seen structure emerge, and then more structure emerge, and more code come in. How on earth could that be happening?
Because once things can copy themselves, you would think you're done. But it's not done. Well, the answer, I think, comes from a very fundamental result in biology, which Lynn Margulis figured out in 1967. Her paper on this result was rejected from a lot of journals before one finally accepted it, the Journal of Theoretical Biology, and it was called "On the Origin of Mitosing Cells." She was the one who proved that mitochondria were once free-swimming bacteria. And she popularized the term symbiogenesis to talk about what was going on here: two life forms that previously were independent came together and made a new life form. An archaeon and a bacterium came together and made a new single-celled life form, the eukaryotes that we are all made out of. Margulis believed that this process of symbiogenesis was the engine behind evolution. It turns out that she was right about mitochondria, and the establishment in biology finally came to recognize that. But nobody really bought her larger thesis, that this was the engine behind evolution, and she remained very much in a tiny minority of people who believed it, even by the time of her death in 2011.

Could symbiogenesis be happening in BFF? Yes, it is happening. And the way you can see that is by looking not at whole tapes reproducing, but at little strings reproducing, maybe only one byte reproducing. Occasionally a single byte will reproduce even right from the beginning, because if you have instructions that can change a value somewhere else in the soup, once in a while an instruction will change a value somewhere else into another instruction. And that's a very lame but nonzero form of reproduction. So you have these little things reproducing from the beginning. What I'm showing you here is all of the reproducers in a particular soup. And you can see that there's a lot happening beyond an ancestor splitting into descendants. This is a tree that goes the other way; it's like the roots of a tree. Things come together, symbiose, and form larger things. So that's exactly how the complexity happens. And symbiogenesis is what gives evolution its arrow of time.
Because if you think about it, evolution in the standard Darwinian sense doesn't have any sense of more or less complex. If you evolve, you will fit your niche better, but that doesn't mean you'll get simpler or more complex; on average, the change is roughly zero. You might change your beak shape to adapt better to this or that flower. But when you have a symbiogenetic event, two things that are each already reproducing themselves come together and can reproduce together. And that means that some extra information has to get added in: how do we get along together? How do we fit together? It's that extra information that adds to the complexity of what comes next, and that gives evolution its arrow of time. We know that there have been a number of other major evolutionary transitions where things came together to make something more complex. For instance, we are multicellular, and that was a major symbiogenetic event, right? How did single cells, single eukaryotes, become multicellular animals like us? It's obviously a symbiogenetic event.
Eörs Szathmáry and John Maynard Smith wrote an article in Nature in 1995 that reviewed what they saw as the eight major transitions in life on Earth. These are definitely all a big deal, and they've since added a few to their original list. But if what we're seeing in systems like BFF is any indication, this is actually something that happens all the time. It's not just these major transitions: there is a whole cascade of mergers and combinations happening continuously, and they are what leads to the complexification of life as a whole.

Do we see any actual evidence of this in biology? Well, this is very much an ongoing area of work, but here's some evidence. This is the human genome. The big surprise when we first saw the human genome sequenced, in 2001, was just how little of it actually codes for the proteins that make us up. It's only about 1.5%.
The rest of it is so-called junk DNA. It's not really junk: some of it is regulatory, and some of it we don't know what the hell it's doing. But what's really interesting is that those big sections, called LTR retrotransposons and DNA transposons and LINEs and SINEs, are all viruses. Basically, they're replicators that replicate inside our DNA and that have burned themselves not only into our somatic DNA, like a classic retrovirus, but into our heritable DNA, and become part of our genome. And we know that some of those endogenized viral elements are doing really important work. For instance, the placenta is made using a viral protein that fuses the membranes of cells together. We know that there is a gene called Arc which, if you knock it out in mice, they stop being able to form memories. We know that parts of the immune system were made this way. There are a few dozen results like that, all coming out in the last 10 to 15 years, and there are more and more of them all the time. And when you look at our DNA, it doesn't look like one thing that has been copying itself. It looks like a medley of things that have copied and fused over and over and over. It's not just neuroscience that's computational. Life was computational from the start.
And it gets more computationally complex over time through symbioenesis. Right?
Because put together the two ideas that I've just shown you: that life is always computational, since it has to copy itself and that's a general-purpose computation, and the fact that symbiogenesis is really important. Well, you now have two computers that have come together and parallelized, and what that means is that you have greater computational power every time you undergo a symbiogenetic event.
So symbiogenesis makes the computation massively parallel. It's not quite the same Moore's law that we had on Earth in Silicon Valley between 1950 and 2006, because then we were making transistors smaller. By the way, AI didn't progress anywhere between 1950 and 2006. But transistors are still getting smaller; what changed is that we stopped being in a situation where making them smaller could make them clock faster. And so around 2006 all the chipmakers began to do the only thing they could, which was to put a lot more cores on the same chip and parallelize. And that's when AI began taking off. This is not a coincidence. Parallelism is exactly what it takes in order to make neural-net-based AI work, and that's why the deep learning revolution happened when it did.
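As an aside, the claim that neural nets are natively parallel can be made concrete with a small sketch. This is purely illustrative, not any production system: a layer's output is a set of independent dot products, so sharding the rows across k workers cannot change the answer.

```python
def neuron_output(weights, inputs):
    """One neuron: a dot product that could run on its own core."""
    return sum(w * x for w, x in zip(weights, inputs))

def layer_forward(weight_rows, inputs):
    """A layer is many independent neuron_output calls: an
    'embarrassingly parallel' workload."""
    return [neuron_output(row, inputs) for row in weight_rows]

def layer_forward_sharded(weight_rows, inputs, k):
    """Split the rows across k workers (simulated serially here) and
    interleave the partial results back into the original order."""
    out = [None] * len(weight_rows)
    for s in range(k):
        shard = weight_rows[s::k]  # rows s, s+k, s+2k, ...
        for j, row in enumerate(shard):
            out[s + j * k] = neuron_output(row, inputs)
    return out
```

Because the shards share no state, each could run on a separate core; more cores means more rows per second, with no need for a faster clock.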
So, computing for growth and healing and replication is modeling your own body. That's life. What about modeling your environment? That's also needed in a dynamic environment. Well, that's what intelligence is. Of course, life was intelligent from the start, because you don't just have to make more of yourself. You also have to find the parts to make more of yourself. The Legos don't necessarily just float around you. You may need to find them, hunt them down. What about the energy that it takes to compute? Computation is energetically expensive.
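That expense has a known physical floor. Landauer's principle, a standard physics result not spelled out in the talk, says that irreversibly erasing one bit must dissipate at least k_B·T·ln 2 joules of heat; a quick calculation puts that floor at about 3×10⁻²¹ J per bit at room temperature.

```python
import math

# Landauer's principle: irreversibly erasing one bit of information
# dissipates at least k_B * T * ln(2) joules of heat.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_limit_joules(temp_kelvin):
    """Minimum heat dissipated per bit erased at a given temperature."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (~300 K) the floor is about 2.9e-21 J per bit:
e_bit = landauer_limit_joules(300.0)
```

Real processors dissipate many orders of magnitude more than this floor per logical operation, which is part of why any serious computation, biological or silicon, has to ingest free energy and shed heat.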
You're creating negative entropy when you compute, and in order to do that, you need to ingest free energy. That's why we all metabolize: because we compute. That's why when I run BFF, my computer heats up. If you run simulations in which you just take random programs that can swim left, right, up or down, and you just see which ones survive in an environment where they're getting energy from a light, the ones that survive are the ones that learn to follow the light.
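A minimal version of that kind of selection experiment can be sketched as follows. This is an illustrative toy, not the simulation shown in the talk: each "program" is just a biased random walker in a one-dimensional world, and pure survival selection is enough to leave the population biased toward the light.

```python
import random

random.seed(0)
LIGHT = 0  # light source sits at the origin of a 1-D world

def make_agent():
    # A "random program": a fixed bias toward stepping right (+1)
    # versus left (-1), drawn uniformly from [-1, 1].
    return random.uniform(-1.0, 1.0)

def run(agent_bias, start, steps=50):
    """Walk for `steps` steps; return final distance from the light."""
    pos = start
    for _ in range(steps):
        step = 1 if random.random() < (agent_bias + 1) / 2 else -1
        pos += step
    return abs(pos - LIGHT)

population = [make_agent() for _ in range(200)]
# Agents starting at +20 need a leftward (negative) bias to reach the light.
scores = sorted((run(a, start=20), a) for a in population)
survivors = [a for _, a in scores[:50]]  # top quartile survive

mean_survivor_bias = sum(survivors) / len(survivors)
mean_population_bias = sum(population) / len(population)
# Selection alone pushes the survivors' biases toward the light.
```

No agent "learns" anything here individually; the light-following behavior appears in the population purely because the walkers that drift toward the light are the ones left standing.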
That is really just a way of saying you have to model your environment too, and you have to figure out how to make your behavior consistent with one that will allow you to do the copying that will allow you to reproduce. And sure enough, that's exactly what bacteria do as well. That's a sugar crystal in the middle, and bacteria that swim have learned how to swim toward the sugar. I've been talking so far as if we're in single-player mode. But of course, whenever you have one bacterium, you have more bacteria. And if you don't have more bacteria yet, you will in 11 minutes, right? So life is a multiplayer game, and it's never single-player. The most important parts of our environment to model are each other. A lifeless
universe is one where you don't have to think very hard. But the moment you start to have a lot of other agents in your environment that have their own energy they have to get, their own stuff they've got to do, your interests can align with theirs or misalign with theirs, and now you've got to get smarter, because you don't just have to model yourself, you also have to model them. And they're modeling you back. So,
we've been doing a bunch of work recently on the team. This is actually not in the book, because it's a little too recent, but it's called multi-agent universal predictive intelligence. And this work is really about the field called multi-agent reinforcement learning, in which you have a bunch of learners that are all trying to learn to do something together based on being individually reinforced on the basis of some score that they get. And the question is, how can they learn to work together? How can they solve things like the prisoner's dilemma?
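The prisoner's dilemma he refers to is easy to state concretely. In the sketch below, using the standard textbook payoffs rather than anything from the paper, classical best-response reasoning lands both players on mutual defection, the Nash equilibrium, while an agent that assumes its counterpart will mirror its choice, the "psychological twin" reasoning that comes up later in the talk, picks cooperation.

```python
# Standard prisoner's dilemma payoffs (row player's score):
#   both cooperate -> 3, both defect -> 1,
#   defect against a cooperator -> 5, cooperate against a defector -> 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_reply(opponent_move):
    """Classical reasoning: best response to a FIXED opponent move."""
    return max(["C", "D"], key=lambda m: PAYOFF[(m, opponent_move)])

# Whatever the opponent does, defecting pays more, so (D, D) is the
# Nash equilibrium, even though (C, C) pays both players better.
nash = (best_reply("C"), best_reply("D"))

def twin_choice():
    """'Psychological twin' reasoning: assume the other player is like
    you and will mirror whatever you choose."""
    return max(["C", "D"], key=lambda m: PAYOFF[(m, m)])
```

The only difference between the two functions is the model of the other player: treat their move as fixed and you defect; treat them as a mirror of yourself and you cooperate.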
Well, that turns out to be a very hard problem for classical reinforcement learning, because ordinary reinforcement learning only learns from the past. And that's fine if you're playing a video game and the video game stays the same as you adopt a new strategy. But if there are other players in that video game world with you, then when you change your strategy, they're going to notice and change their strategy. So the statistics of the environment are not constant, and they're learning too. So you have to learn about them, and you have to learn to predict what they're going to do in response to what you do, and you have to learn that they're learning and that they're also predicting you, and that they're predicting you predicting them, and that you're predicting them predicting you, and so on. So this is a really hard problem, and the paper has gotten to be a hundred pages or so and has some very complex math in it, because modeling an environment that includes the thing that is modeling the environment, and all the things in the environment that are modeling you back, turns out to be a difficult problem. But the team has
figured this out, and the results are really cool. The way you do this is by getting rid of the idea that you are outside the video game and putting yourself in the video game. In other words, you have to not only model the environment, like AlphaGo does when it's thinking about a go game or a chess game and just imagining the game. You have to imagine yourself playing the game as part of the environment, and you have to start to predict yourself and predict others. The reason is, we have a face, if you like: when I smile, I know what I feel like on the inside, because I've built a model of myself; and when I see you smile, I can guess that you're happy too. And the only way that I can make those kinds of inferences is by knowing that we're similar, by knowing that I also have a face and I do that when I'm happy. And
it's that ability to empathize, to model the minds of others, that is at the core of being able to solve the multi-agent reinforcement learning problem. I think that this actually kind of explains why we've got consciousness, in the sense that, you know, consciousness is often thought about as some kind of weird epiphenomenon. You could have a philosophical zombie or something that behaves identically to us but is dead on the inside. I don't think that's true at all. I think that the reason we are conscious is because we are modeling ourselves, as well as modeling others, as well as modeling others modeling ourselves, and so on and so forth, because that is behaviorally essential, because it's functionally essential in order to allow us to cooperate with each other. And
when you do that, when you embed yourself in the world and you think about others like you, you're able to solve problems collectively. And this is essential in order to have symbiogenesis, in order to have symbiosis with those others, and in order to create a larger entity. I'm not saying exactly that I think that your cells are conscious, but I'm saying that they definitely have models of the rest of your body, or of the other cells around them, in order to be able to collaborate with them. And that's, you know, maybe a baby step in a certain way toward consciousness. When
it comes to very complex big-brained animals like us, which have tons of neurons that have come together through an act of symbiogenesis, we want to work together in order to make bigger things happen. You know, when we talk about human intelligence, we imagine things like figuring out how to transplant organs, how to go to the moon, and how to build computer chips. None of us can do these things on our own. The intelligence that we're talking about is the superhuman intelligence of our collective symbiogenetic entity. And in fact, it's not even just a human entity. It includes cows and wheat and all sorts of other entities, as well as steam engines, by the way, without which we wouldn't exist. That super entity, which has arisen through us being conscious enough of each other to build models of each other, is what has resulted in this explosion of intelligence in what we think of as humanity over the past 10,000 years. And that's also what allows one to solve the psychological twin prisoner's dilemma.
Meaning, cooperate a priori with another actor in order to solve these game-theoretic puzzles that involve mixed payoffs. It's a very old, classic problem. If you think about this from the classic perspective of game theory, as John von Neumann invented it and as it was refined by John Nash later in the 20th century, these are rational-economic-actor ideas about how people would interact if they were just optimizing for themselves. The solutions are very grim. These Nash equilibria are essentially selfish and prevent any collaboration. But if you imagine that others are like you, and also will change their strategies in response to your strategies and so on, then a new set of equilibria emerge from this kind of thinking that are much more cooperative. Symbiosis and symbiogenesis
require modeling ourselves and modeling each other, and we have to think about each other as if those others were like ourselves. That's where theory of mind comes from. By the way, large language models have theory of mind. They kind of have to, in order to be able to carry on conversations. Right? When you're interacting with a large language model, you have to think about what you've got to tell it and what you don't have to tell it because it already knows, and so on. And it has to do the same thing back to you in order for that interaction to succeed. Those things have been learned by observing tons and tons of interactions between people, which is what the training data consists of.
So, starting with simple bacterial quorum sensing and multicellularity and so on: since every living entity is computational, as they combine, they parallelize, and that does lead to a kind of Moore's law, and it leads to more and more cooperation on larger and larger scales. There really is this kind of Moore's law progress that I was so dismissive about when I first heard it down here in San Francisco. These increases in brain size that have happened during human evolution are a result of exactly those dynamics. This is the last 7 million years or so. There have been explosions in brain size.
Those have been observed in various other social species as well: in cetaceans, whales and dolphins, in bats, in certain species of birds. And the reason is that if you share DNA with another entity of your species and you get smarter to model them, you also become harder to model, and they're getting smarter as well. So now they have to model you back, and it's a kind of friendly arms race. Well, how friendly it is depends, right? You're also competing for mates and prestige and all kinds of other Machiavellian stuff, but you're also trying to collaborate, right, in order to get things done collectively. And all of that leads to an explosion in intelligence. These are some classic results from Robin Dunbar showing the relationships between cortical size and the size of troops among monkeys and apes. They're correlated, of course, because if you're able to model more others, then you're able to form a larger troop before it falls apart.
That's why, you know, having a bigger brain doesn't just let you have a larger troop, but also have greater collective intelligence, which then forces the brain once again to get bigger. So, uh,
scaling cooperation and competition is how we got these big brains. It's also
how we came to be able to recognize ourselves in mirrors. Uh, as many of you probably know, the mirror test in which an animal, you know, is able to recognize in the mirror not just that
that's another chimp, but that that's me in there and then check themselves out.
But you know there there only a handful of other animals that do this u because the level of sophistication you need in order to realize that uh you know not only are there other beings like you in
the world but that you are also a being like the other ones that you see and to be able to sort of make that mapping right that that's you in the mirror is quite sophisticated. It's quite a quite
quite sophisticated. It's quite a quite a sophisticated active theory of mind and we do it together with each other all the time. Uh when you think about uh a rowing crew for instance and the way
they can sometimes achieve uh what people in in crew called call swing where you know they get in sync so perfectly that everybody is anticipating the behavior of everybody else perfectly and it feels like the the boat acquires
a kind of soul if you like that is basically a computational process in which they've achieved a kind of group consciousness. I just want to say a
consciousness. I just want to say a thing or two about human AI symbiosis because that seems to me where we're headed. I hear a lot of talk uh among uh
headed. I hear a lot of talk uh among uh two camps uh about AI and our future with AI. Some people more aligned with
with AI. Some people more aligned with ideas about AI ethics think that AI is is fake uh that it's not real intelligence or that that this is uh
somehow uh counterfeit version of of intelligence or just statistics uh and are concerned with uh you know with various uh issues about um justice uh that are that are related to how AI
behaves um or fools people into thinking that it's real. And then there are the existential risk folks who uh have gone from being rapture of the nerds. You
know, we're all gonna we're all going to go to heaven and be immortal and upload our brains to the apocalypse is coming and we're all going to die because the AI is going to uh is going to take over.
And I think that these are these are both wrong perspectives. U the idea that that there's a dominance hierarchy between species is not how things have tended to work in in life on Earth. And
I think that we've been fooled into thinking that because of an overly classically Darwinian perspective on how evolution works. Uh you know if you're
evolution works. Uh you know if you're just doing classic Darwinian evolution then a mutation is uh you know something that is only a little bit different from uh from the wild type and they will
compete and whichever can out compete the other one wins and the other one dies. But in a symbiotic world, in a
dies. But in a symbiotic world, in a symbioetenetic world, things are combining to make larger structures all the time. And it's not so clear where
the time. And it's not so clear where one thing ends and another begins. And
cooperation is uh is just as important to force as competition. And I think that I think that that's uh you know, very much the story of how we came to exist. Uh and as far as I can tell,
exist. Uh and as far as I can tell, that's very much the story of what's going on with technology and humanity as well. You know, I mentioned near the
well. You know, I mentioned near the beginning of this talk that if there were no machines, most of us in this room would not be here. We were about 1 billion people around the time of the
industrial revolution. And uh right
industrial revolution. And uh right after those machines which externalize metabolism by by burning uh fossil fuels, right after they uh came on the scene, our numbers exploded by nearly a
factor of 10. Why is that? Well, we know that we make the machines, but also the machines have made us. Marks and Engles talk about this when they talk about the, you know, people springing out of the ground like wheat. Uh that is
literally true. It's all of that
literally true. It's all of that additional free energy that came from uh from burning fossil fuels that resulted in all of the humans that we've got. And
not only did it result in much greater numbers, but this plot, which is from an economics paper just just published very recently, shows on a log log scale the real wages of people versus the population. And what you can see is that
population. And what you can see is that throughout the Middle Ages, we were oscillating, trading off between population and and wages. This was
essentially a Malthusian trap. In other
words, we were constrained energetically in our numbers. Uh the population was constrained. And the moment we began to
constrained. And the moment we began to metabolize externally, we shoot off to the right. Suddenly, both numbers and
the right. Suddenly, both numbers and quality of life rise dramatically because of all that extra energy that is liberated because that's what intelligence ultimately does. uh right
whether it's photosynthesis or the invention of steam engines or of nuclear power and so on the more intelligent you become the more sources of energy you're able to tap and the more additional
levels of symbioenesis you're able to achieve so uh I see no reason to believe that that AI is poised to be any different from uh from all of those previous symbioses I also don't see us
as being distinct from the technologies that we make we think of humanity in terms of the individual person but we're already not where everything that we've made and that co-constructed us and you know we didn't achieve artificial
intelligence until we literally began to train it on all of the human output in text that we've generated on all over the internet. Uh what could be more a
the internet. Uh what could be more a part of us than that? I'm going to end there and uh and and Benjamin and I I think we'll we'll shift into conversation mode.
>> I will you give us plenty to talk about.
I don't think we'll have a problem with this. I wanted to talk a little bit
this. I wanted to talk a little bit about the arrow of time.
Um, and particularly the arrow of time as one that operates that we can map through not just increasing complexity, right? Light is the ability, you know,
right? Light is the ability, you know, fighting entropy and so forth. The
increasing complexity, but also increasing complexity that seems that goes through phase transitions. Yes.
Right. And so um but the phase transitions are ones that and I think you've made the point quite clearly retains what came before. Right. Right.
It's not just like okay done with the old here's the new. But that all of this came before us already. It's all inside us.
>> Yeah.
>> It's all still here.
>> That's that's the amazing thing. like we
are actually, you know, societies of bodies of colonies of conjoin bacteria.
You know, all of us are just bacteria in this room, right? They're still here.
>> Uh and you know, they're just nested like matrioska dolls. And even if you zoom in, you know, you zoom into the bacterium, uh and then you zoom further into the mitochondrian, what you actually see reproduced in the mitochondria. And Nick Lane made this
mitochondria. And Nick Lane made this point very beautifully in his in his book transformer, >> right?
>> Is the conditions of the deep sea vents where those mitochondria first evolved.
So it's almost like they artificialize, right, in your language the environment that they originally evolved and then they create capsules around themselves.
So yeah, it's it's it's sort of shells within shells within shells.
>> And this I mean this in your mind this is since Sarah Mari Walker's book here earlier in your mind this is this rhymes with assembly theories idea of the sort of the persistence of these things over
time. It does. Yeah. So I mean and the
time. It does. Yeah. So I mean and the same is true of technology by the way.
So if you look at I I give in um in the book the example of the hafted spear. Uh
so you know if you have a um stone point uh at some point there's this innovation in which uh some clever cave person decides to tie it with a senue to a stick and now you have a spear.
>> So you can't have a spear before you have stone points just like you can't have a ukariote before you have proariots that can come together.
>> Right. And this is the arrow like there is a like there's >> there's an arrow.
>> This is there's a certain degree of nonreversibility.
>> Yes. And that's exactly why it uses energy because anything that is irreversible consumes free energy.
>> That's right. Okay. So here's what in the Smith Murray and Manor Smith slide that you show like they identify speaking these phase transition the eight key transition what they see as
the major transitions in evolution.
Right. And you uh point to these and I think could show how each one of these is built on symbioenesis.
>> And and they and they said they said that as >> and computer genesis too.
>> Yes. That they didn't say >> that they didn't say. Um
>> but they they they also if if you like halfway between you know what Margulus said which is simogenesis is important and what I'm saying which is it happens all all the time.
>> Yeah. Right. Right. Right. Right. and
and so they're identifying some of the really big ones, but when you start zooming in, you realize that they're happening all over the place. I mean,
every one of those lines and signs and endogenized viralis is one of those events. Or even like termites, the fact
events. Or even like termites, the fact that termites can eat wood is because they have they're they they engage in a symbioenesis with a with a an organism in their gut that actually does the
digesting of the wood. So, um, you know, it's sort of like a um a power law, you know, where they're looking only at the top right of that power law, but it's an entire So, it's a gradient. It's a
gradient all the way down. Yeah. Okay.
But speaking of stages and and these sorts of phase transitions here as well like you showed with the with that's that six million operations for the brave >> looking at when I ask you a little bit
about to ask you to prognosticate a little bit about the future of intelligence.
Do you see the the longer term the symbio genetic relationship between evolved human intelligence and
mineralbased intelligence that we that we have constructed as something like um the ninth stage?
>> Yeah, I do. Um I I think >> why and why so like what would be the criteria by which one could say yes or no to that?
>> Well, I guess how big a deal it is >> on a planetary scale. was, you know, termites were a big deal, but the industrial revolution was maybe an even bigger deal. You know, the fact that
bigger deal. You know, the fact that that these big deal changes are happening more frequently, uh, by the way, is also something you would expect from the dynamics complexification, >> right? Because the more things you've
>> right? Because the more things you've got that have come together, the more parts you've got on the table that can now come together. Uh, W. Brian Arthur
has talked about this in the context of technologies.
That's same exact process.
>> That's right. That's right. All right.
Um is there anything else you would want to say about before we move on from this about the future of intelligence like where do you >> other than it will grow >> other than it will grow and it'll become increasingly complex and that we will be
scaffolds for something >> that that we humans will I mean I'm just to sort of like frame the question that as opposed to thinking about a lot of times the way in which this is thought
through is in terms of a language of posthumanism right that there's humans they had the run and now there's going to be something else that takes over, right? Even locked.
right? Even locked.
>> And I do disagree with this perspective >> because humans because everything persists >> because everything persists. Everything
is still there.
>> Please draw it out.
>> Well, um, you know, there are still bacteria after there are ukarotes.
>> Right.
>> And in fact, the number of niches for bacteria and the and the varieties of bacteria have greatly increased as a result of ukarotes coming on the scene.
>> Right. Right. Right.
>> And the same is true of of of ukarotic single cellled organisms. when multisellular ones come along, you know, suddenly the guts of multiselled ukarotic organisms are these incredible new environments and and they, you know, create all kinds of other environments,
right, for single-c cellled life. So
>> the niches and the environments grow and the things that were there before generally are still are still there in the future too. So that symbiogenetic relationship would be one in which there
would be a construction of new niches of which we would be part and we would persist as part of a larger complexity >> that we are in fact ourselves bootstrapping in a way. Is that a fair
way to yes what you're >> saying? I I don't want to sound too
>> saying? I I don't want to sound too polyiana. I mean you know there are you
polyiana. I mean you know there are you know aggressive symbiosis there are die-offs you know like dramatic things happen in the history of the earth as well right there are collapses. I don't
want to minimize any of that but but the the pattern you know modulo that there are snakes and ladders uh is that is that things get more and more composite and more and more complex. The idea that
that because there's a new kind of entity, we're going to get replaced by it strikes me as as, you know, using dominance hierarchy thinking which is like all about how like monkey A, you
know, decides or doesn't decide to fight with monkey B for the mate or something, you know, like generalizing that idea AC across.
>> I find it interesting that your book and Yukowsky's book come out around the same time. There's a bit of a um >> you know, yeah. Um
>> there'll probably be some shared readership and I >> we have we have to set up some sort of fish to cuss. Yes, I think I think on on this as well. Um I was
also struck by the line that you said um what it's like to be a next token predictor, right? Which you know a certain kind of
right? Which you know a certain kind of philosopher would call qualia, right? Or
this experience of experience or one's experience of your experience of your experience or something. and and and the way in which you set this up is that well we know the answer to what it's
like to be a next token predictor because we know what it's like to be us.
>> Yeah.
>> Um but transformer models and all of their their descendants uh are also next token
predictors in a way without using the C word necessarily that is consciousness.
>> You're going to get me in trouble now.
No, I don't want you because it's this it's such a loaded term that it comes with such baggage that may not really be what we're looking at, right? I I you
know, as we I think discussed like there it's part of the reason why a new kind of school of thought is needed because there's all these things happening right in front of us that we all point to, but
we're all kind of arguing over like which 17th century word we should use to to call it.
>> Exactly.
>> So maybe C is not so helpful here.
Anyway, I'm just But what is your intuition, if that's the right word, about what kinds of similarities and differences there might be between being one kind of next token predictor versus
being another kind of next token predictor? And is there another way in
predictor? And is there another way in which you see that that that kind of spectrum of difference and similarity that doesn't require, you know, maybe there's some sort of legacy metaphysical
legacies to get at it? Well, I will try and map this a little bit onto that legacy. Yeah.
legacy. Yeah.
>> So, there are qualia in philosophy speak which are like red uh you know or apple or something you know and not just apple but like you know what an apple is like and what it's like to crunch into it and
so on. You know what redness is like.
so on. You know what redness is like.
>> Um you know why do we have experiences of red? Well, it's it's obvious why we
of red? Well, it's it's obvious why we have experiences of red because it's behaviorally relevant for us to have those experiences. uh you know Ed Young
those experiences. uh you know Ed Young has written you know very eloquently about this about how you know different species of animals right >> immense world >> immense world right uh you know any any given animal species
>> learns to model what matters >> for for that species to continue to exist in the future so we care about red because ripe fruits are red because blood is red uh because red matters uh
to us and and so you know of course we have qualia of that or and hunger same thing right you know you know when you start to get hungry like you better goddamn meet or else there's not going to be a you to you know pass on your
lack of a model right so so um we have qualia for very good reasons >> and um and then there is self-consciousness like what's the what's what's that all about well when
you start to when you start to model you know use theory of mind to model others and model others modeling you and model yourself modeling others modeling you and so on then you know it's not we're not just talking apples and redness you
know we're talking >> people uh and including a So um you know for me that is a a very functional straightforward account of
what we what we mean by um by consciousness.
>> Okay.
>> Now, does that mean that consciousness feels like, or is, the same thing for a language model as it is for us, that is, for an individual human? No, I don't think so. I mean, companies have something like a consciousness as well, right? They have to model other companies; they're competing and cooperating with them, and so on. Does that mean that companies are conscious the same way we are?
>> I imagine not. But these things are also all relationships. It's hard for me to even say what is true in an absolute sense about a company, because all we have is that network of models of each other and of each other's models.
>> Okay. One question that a number of people asked me to ask you has to do with energy. There's a lot of discussion; you can't swing a cat, so to speak, and I don't recommend it, without hitting an op-ed or think piece about how much energy and water AI uses. And if we're thinking about this appearance of AI as a planetary-scale phenomenon, it's part of a planetary metabolism that uses energy, that dissipates heat, that produces and absorbs information. It's hungry. There's nothing virtual about it in this way as well. But you come at this from a somewhat different perspective: not only because you think, if I understand it, that some of the ways the questions of energy and water are framed, at least in the short term, may be misinterpreted or misconstrued, but also because you think about this on a longer-term, say 50-year or 100-year, cycle. Could you correct our thinking on this, and how should we be thinking about that relationship?
>> Yeah, I can try. So first of all, there is a lot of work to do on the efficiency of computing, for neural computing in particular. I mentioned that in 2006 something really big changed: Dennard scaling stopped, which is to say frequency scaling of semiconductors, and so we began to have to parallelize. But the initial version of parallelizing was just to put more serial processors of exactly the same kind on the same chip, and that's kind of a dumb way of parallelizing; we haven't become natively neural in the way we compute with silicon. What I know has been happening at Google, at least, is that we've had orders of magnitude of improvement in the efficiency of Gemini models over the last couple of years, basically by doing the work of figuring out how to compute properly, even with the same fundamental transistor-based technologies for parallelism. And I think there are more orders of magnitude to be won, probably a factor of a thousand.
>> A factor of a thousand. Okay.
>> That would be my guess, based on back-of-the-envelope calculations. We also know that what we've already gotten to now is better than what a lot of the people who are concerned about those environmental effects claim. And I'm very sensitive to the environmental crisis, so I don't say this as somebody who minimizes those problems. But there are a lot of places to look for where we're doing dumb things with respect to carbon other than AI as well. And the concern with AI is really the rate of exponential rise more than it is the absolute value.
>> And the issue there is that we can only make good estimates of the sources of energy, and of the methods, that we already know are in the pipeline. A factor of a thousand is great, but exponential rise will eat up those orders of magnitude fast.
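(A back-of-envelope illustration of that point, with invented numbers, not figures from the talk: a one-time factor-of-1000 efficiency gain is consumed in about ten years if demand doubles annually, since 2^10 ≈ 1024.)

```python
# Hedged sketch with made-up numbers: how fast exponential demand growth
# consumes a fixed, one-time efficiency windfall (a Jevons-style dynamic).
def years_to_consume(efficiency_gain: float, annual_growth: float) -> int:
    """Years until compounding demand growth cancels a one-time gain."""
    consumption = 1.0  # relative energy use right after the gain
    years = 0
    while consumption < efficiency_gain:
        consumption *= 1 + annual_growth
        years += 1
    return years

print(years_to_consume(1000, 1.0))   # demand doubling yearly -> 10 years
print(years_to_consume(1000, 0.25))  # 25% annual growth -> 31 years
```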
>> Yeah. There's a Jevons paradox kind of dynamic there. So, what then?
>> Well, we also know that intelligence unlocks new forms of energy, as it always has. I think it's likely that fusion will get cracked, with help from AI, over the coming years. That would be great, and it would really change the game with respect to a lot of energy and environmental problems on Earth, well beyond AI. Also, as I think you've written, all of our energy, modulo a few nuclear isotopes in the ground, is ultimately solar, and the amount of sunlight up there is vast. The enormous majority of it radiates out into space and never touches a planet or a satellite of ours.
>> Mhm.
>> So I think a lot not only about how to work on the demand side of energy but also on the supply side. There's actually a lot of energy in the universe.
>> Yeah.
>> To be used.
>> Okay. So I'm going to turn to some of the questions we have. Fair warning: I'm going to go a bit over. We have a question from Stewart Brand. Hello, Stewart; we wish you could be here. "Is looking ahead a general brain function? Eyesight is largely conjectural: look-ahead, multiple guesses at what is being seen, followed by confirmation, often with sketchy data. LLMs seem to work that way. What else does?" Okay, that's a really gnomic question from Stewart.
>> So, I'll try. Yes: predictions are always conjectural. Maybe I wasn't quite as explicit about this as I should have been, but when I say we're next-token predictors, what we really mean by that is that we're trying to model the relevant parts of our environment. Why do we try to do that? Well, "relevant" means things we could act on in ways that will matter for us in the future. So not only do we have to be able to make meaningful decisions based on the observations we can make, say via vision, but what we then see also has to change as a function of our behaviors. That whole loop has to exist, cybernetically, in order for any of this to make sense.
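(A schematic sketch of that loop, not any real model: an agent whose predictions drive its actions, and whose actions change what it observes next. All names and numbers here are invented for illustration.)

```python
# Toy predict-act-observe loop: prediction informs action, action changes
# the world, and the changed world feeds back into the next prediction.
class World:
    def __init__(self):
        self.state = 0.0

    def step(self, action: float) -> float:
        self.state += action          # actions change what is observed next
        return self.state

class Agent:
    def __init__(self, target: float):
        self.target = target          # "relevant" = what matters to the agent
        self.prediction = 0.0

    def act(self) -> float:
        # act to close the gap between prediction and what matters
        return 0.5 * (self.target - self.prediction)

    def update(self, observation: float):
        self.prediction = observation  # prediction tracks the observation

world, agent = World(), Agent(target=1.0)
for _ in range(20):
    obs = world.step(agent.act())      # the full cybernetic loop
    agent.update(obs)
print(round(world.state, 3))  # prints 1.0: the loop converges on the target
```

Without the feedback half of the loop (observations changing with behavior), the agent's predictions would have nothing to steer.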
>> Okay.
>> Now, does that involve an act of guesswork? Of course; it's an act of imagination. And this is one of the reasons that we see, for instance, hallucinations in LLMs.
>> It's imagination.
>> It's imagination, yeah. It's impossible to have prediction, or even to recognize objects, without it.
>> You really don't want to get rid of them.
>> No.
>> The worst thing would be an LLM that can't do anything.
>> That can't imagine anything.
>> Yeah.
>> Yeah. Now, that doesn't mean there isn't plenty of work to do on getting the accuracy of those predictions better. Also, this sense of confidence in the confidence, the calibration, needs to improve,
>> and it has improved quite a lot in the last few years, but there's still a long way to go,
>> and some of it obviously has to do with how we use them and interpret them.
>> Well, yeah. Okay. Next is from Darren Zu, one of the original Antikytherans:
>> How and where do you see symbiogenesis occurring in foundation models today? Is it mediated at the infrastructure level, or more at the cultural level?
>> Yeah, that's a great question. There's a very literal sense in which we're seeing symbiogenesis in the models, which is that a lot of mixture-of-experts models are being built nowadays. A mixture of experts is actually a bunch of models working together; that's one of the ways of scaling. So we're essentially rediscovering social scaling in models. And in fact, there's a pretty cool paper from, I think, last year showing that even if you train a giant monolithic model, if you look inside it you see that it has undergone functional differentiation. In other words, what you've actually done is to train a little ensemble inside,
>> in the same way that our brains are ensembles of regions,
>> lots of cortical columns all fighting it out with each other. Yeah. Okay.
>> Well, fighting, cooperating, modeling each other, specializing, and so on. Right.
>> Right. It's societies all the way down.
>> Yeah, it's societies all the way down.
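(A minimal sketch of the mixture-of-experts idea just described, with made-up sizes and random weights; nothing here corresponds to any production model. A learned router picks a few specialist sub-networks per input and blends their outputs.)

```python
import numpy as np

rng = np.random.default_rng(0)
D, E, K = 8, 4, 2  # feature dim, number of experts, experts used per input

# Hypothetical toy weights: one linear "expert" per slot plus a routing matrix.
experts = [rng.normal(size=(D, D)) for _ in range(E)]
router = rng.normal(size=(D, E))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route x to its top-K experts and blend their outputs by gate weight."""
    logits = x @ router
    top = np.argsort(logits)[-K:]        # indices of the K highest-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                  # softmax over only the chosen experts
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

y = moe_forward(rng.normal(size=D))
print(y.shape)  # prints (8,)
```

The "ensemble inside a monolith" observation is the same structure without an explicit router: different sub-circuits end up specializing on their own.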
>> Nice. Okay. This is a great question to end with, and I'll invite you to take as much rope, or tether, on this as you'd like. From Angela Gronitz: what role do you see human creativity playing as AI advances?
So, first of all, I think about this as a person who imagines himself to be creative as well; I do think of my writing as creative output. I think that being an artist has become economically difficult in the last 20 years for a variety of reasons, which have a lot to do with the Pareto distribution of rewards to artists and with consolidation effects.
>> It was never actually super stable, but yeah.
>> It was never super stable; usually people had to augment their income.
>> The starving artist exists for a reason.
>> Yes, that's right. And maybe that has a role as well.
>> Oh yeah, perhaps. Yeah.
David Cope recently passed away. He was a composer who, as far as I know, was really one of the first to take computer composition seriously. Not the first; I mean, Terry Riley, who composed the piece that played for our Long Short, that piece was from 1972. Anyway, David Cope began using statistical, natural-language-processing-type ideas to do composition when he was suffering from composer's block. He had a commission, I think an opera, that he was supposed to write, and in his studies he taught himself how to code in the early '80s. I mean, I do this kind of thing as well: when I'm procrastinating, I have to be doing something else that I convince myself is productive, or whatever.
>> Of course. Yes.
>> And so he spent a few years doing that, and then he finished his commission in six seconds, when he finally got the code running.
>> And everybody got really pissed off at him.
>> And the piece was six seconds?
>> Right. Well, the piece was a lot longer than six seconds, but there were a lot of composers who even questioned whether he was really using code to compose.
>> I see.
>> And he proved them all wrong by dropping a zip file on his website with 5,000 cantatas in the style of Bach, all in MIDI, of course, because nobody's going to perform all that stuff. And I've probably listened to more of those cantatas than anybody other than David Cope. Maybe even more than David Cope.
>> A bunch of them are pretty good.
>> You have some favorites?
>> Yeah, they're pretty good, actually. But nobody gives a damn, and I think that's because art is about our relationships with each other
>> as much as anything else.
>> You know, it's not just the artifact.
>> It's not just the artifact. A Bach piece is beautiful and special. And then one day, you know, you've had this beautiful shell that you thought was unique, and you open the door and it's a beach, with shells as far as the eye can see, and they're all beautiful.
>> I don't think that actually destroys your relationship with your shell. And this is all made out of relationships that we have with each other. So, that's part of my answer.
>> Mhm.
>> But another part of it is that I think we have some misapprehensions about creativity, too.
>> I see.
>> We try to cover our tracks. We try to pretend that stuff came out of nowhere somehow. When James Joyce scholars figured out what had gone into Ulysses, he said, okay, the next one I write, I'm going to foil you all, and you're not going to be able to figure it out. That was Finnegans Wake.
>> Yeah.
>> Hey, they did pretty well.
>> They did pretty well as well, because we love to nerd out on stuff. But we're always remixing and combining the things that we've encountered. How else would we be able to create? This is why you get so much simultaneous invention, simultaneous discovery. It's why cubism gets invented simultaneously by, you know, 18 different artists. It's why the light bulb was invented simultaneously by a dozen different inventors.
>> Because once those conditions are there
>> Yeah. Once you know how to blow glass, how to draw a filament, how to make an electric current, and you need light,
>> somebody's going to come up with a light bulb,
>> or twelve of them at once,
>> or twelve at once, right?
>> But they were also all different. Every one of those combinations had things that were different about it: how the glass was blown, whether the bulb was long or round, whether it had prongs or screws. And what we make, the contingency of a symbiogenetic world, is shaped by all of those decisions and by which one sticks. So I guess what I'm trying to say is that there's something sort of deterministic, in a way: things are going to combine, stuff is going to happen, certain ideas are about to pop, whether in one person's head or in others'. But at the same time, the particulars of exactly how it happens really matter for the culture going forward.
>> Okay. So, two things, just to make sure I follow. One is that the kind of Romantic, and I mean with a capital R, dichotomization of determinism and creativity is actually
>> wrong.
>> It's actually wrong. Yeah. Right. And the other one is
>> But it's also right, because details matter.
>> Because details matter. Okay, fair enough. But also, maybe the focus of creativity is misplaced on the artifact itself. There's a lot of concern, you know, Hollywood had a strike over this recently, about the role of generative AI in making the artifact.
>> Right, like Harold Cohen: AI can make a painting, AI can make a Bach, but creativity isn't the artifact.
>> No.
>> It's not AI's ability to make the object that's the key, if I'm following. Is this kind of
>> When we have connected our economic survival with production in the rigid ways that we have under capitalism,
>> we have already done something that is going to pose increasing problems for us no matter what sort of labor we do going forward.
>> You know, we're in a world of increasing abundance, for a variety of reasons that we've been discussing for the last hour and a half, right? But we are also in a world in which the more you think about things in these zero-sum, exchange-value sorts of ways, the more problems you're going to create.
>> The more problems you're going to create. So, and I don't want to opine too much about the Hollywood strikes and so on, but some of that is based on structural problems in the way that whole system is set up.
>> Undoubtedly.
>> And some of those problems are also Romantic-with-a-big-R problems in how we conceive of things. Yeah.
>> You know, the longest-running lawsuit of all time, as far as I know, was the one against George Harrison for ripping off "He's So Fine."
>> Yeah, 'cause G-C-E is just so original.
>> Yeah. Yeah. Right.
>> Okay. We're going to end here. Blaise, we'll see you all in the lobby afterwards; there are plenty of other questions, other things to discuss. But before we do, is there anything you'd like to leave the audience with, here and online, about how they should approach the book, any call to action, or anything else you'd like to make the call for now?
>> Well, yeah. There is a call to action in this, which is: I feel like we always need to be careful when we tread the line between scientific observation and ideological commitments; that is to say, between how things are and how things should be.
>> And the problem is that our ideas about how things are often color the way we think about those shoulds. Darwinian thinking resulted in a lot of policies and approaches that were quite destructive, partly because they were based on just wrong assumptions about how stuff is. So most of the work that I've been talking about is, hopefully, shedding some new light on certain aspects of how stuff is. That doesn't necessarily invalidate all the things that we learned; Darwinian evolution does take place, it is real. But at the same time, this shows you that we've only been looking at half the story. There's this whole other half,
>> right?
>> And understanding some of those things about how things are should hopefully alleviate some of our ill-founded anxieties, but also change some of our ideas about the shoulds. And I would invite people to think about those shoulds in light of what we are starting to learn.
>> Instead of a social Darwinism. Yeah. Society predated social Darwinism; a society predicated on social symbiogenesis.
>> Yes.
>> And what that would be is the question.
Okay. That's a great place to leave that. Great. Well, we'll see you all in the lobby. Thank you so much, and thanks to the Long Now Foundation.
>> And thank you. Yeah, as always.
[Music]