The arrival of AGI | Shane Legg (co-founder of DeepMind)

By Google DeepMind

Summary

Key takeaways

  • **Human intelligence not upper limit**: Is human intelligence going to be the upper limit of what's possible? I think absolutely not. Compared to data centers, human brains have six to eight orders of magnitude less in energy, space, bandwidth, and signal speed. [00:00], [35:12]
  • **Current AIs surpass humans unevenly**: It's already much better than people at speaking languages—it'll speak 150 languages or something. Nobody can do that. On the other hand they still fail to do things that we would expect people typically be able to do like continual learning. [03:00], [03:25]
  • **Minimal AGI in ~2 years**: Minimal AGI is an artificial agent that can at least do all the sorts of cognitive things that we would typically expect people to be able to do. We're not there yet, but it could be one year, it could be 5 years. I'm guessing probably about two or so. [01:38], [06:45]
  • **AGI levels: minimal to superintelligence**: Minimal AGI is the point at which this AI is no longer failing in ways that we would find surprising if we gave a person that cognitive task. Full AGI achieves the full spectrum of what's possible with human cognition. Beyond that is artificial super intelligence far beyond what humans are capable of. [07:11], [08:17]
  • **System 2 safety for ethical AGI**: Chain of thought monitoring or system two safety means the AI reasons step-by-step about ethical situations, analyzing complexities, actions, consequences with respect to ethics. It can become more ethical than people because it consistently applies reasoning at a superhuman level. [20:39], [23:08]
  • **Post-AGI massive societal transformation**: It means a massive transformation. This is actually something which is going to structurally change the economy and society and all kinds of things. We need to think about how do we structure this new world. [00:00], [37:26]
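
A rough back-of-envelope check of the "six to eight orders of magnitude" claim, using the ballpark figures Shane quotes later in the transcript (about 20 watts versus 200 megawatts, a few pounds versus several million pounds, roughly 100 hertz versus 10 gigahertz, and 30 metres per second versus the speed of light). This is a sketch based on his stated estimates, not measured data-centre figures:

```python
import math

# Ballpark brain vs. data-centre figures quoted in the transcript
# (Shane's rough estimates, not measurements).
comparisons = {
    # dimension: (human brain, data centre)
    "power (watts)":            (20,  200e6),  # ~20 W vs ~200 MW
    "mass (pounds)":            (3,   3e6),    # a few pounds vs several million
    "signal frequency (hertz)": (100, 10e9),   # ~100 Hz vs ~10 GHz
    "signal speed (m/s)":       (30,  3e8),    # ~30 m/s vs the speed of light
}

for dimension, (brain, data_centre) in comparisons.items():
    orders = math.log10(data_centre / brain)
    print(f"{dimension:26s} ~{orders:.0f} orders of magnitude")

# Prints roughly 7, 6, 8 and 7 orders of magnitude, matching the
# "six, seven, maybe eight" range Shane gives in the conversation.
```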

Topics Covered

  • Current AIs Surpass Humans Unevenly
  • AGI Levels: Minimal to Superintelligence
  • Test AGI via Human Benchmarks
  • Human Intelligence Not Upper Limit
  • Post-AGI Economy Needs Redesign

Full Transcript

So, is human intelligence going to be the upper limit of what's possible? >> I think absolutely not. >> I do wonder what all of this means for people. I mean, if we are getting to a point where essentially I mean human intelligence is dwarfed by super intelligence. What does that mean for society? >> It means a massive transformation. This is actually something which is going to structurally change the economy and society and all kinds of things. And we need to think about how do we structure

this new world.

Welcome to Google DeepMind: The Podcast, with me, your host, Professor Hannah Fry. AGI is coming. That's what everyone seems to be saying. Well, today my guest on the podcast is Shane Legg, chief AGI scientist and co-founder of Google DeepMind. Shane has been talking about AGI for decades, even back when it was considered, in his words, the lunatic fringe. He is credited with popularizing the term and making some of the earliest attempts to work out what it might

actually be. Now, in the conversation today, we're going to talk to him about how AGI should be defined, how we might recognize it when it arrives, how to make sure that it is safe and ethical, and then crucially, what the world looks like once we get there. And I have to tell you, Shane was remarkably candid about the ways that the whole of society will be impacted over the coming decade. It's definitely worth staying with us for that discussion. Welcome to the

podcast, Shane. Uh, we last spoke to you five years ago and then you were telling us your sort of vision for what AGI might look like. In terms of the AI systems that we've got now today, do you think that they're showing little sparks of being AGI? >> Yeah, I think it's a lot more than sparks. >> More than sparks. >> Oh yeah. Yeah. So my definition of AGI, or what I sometimes call minimal AGI, is it's an artificial agent that can at least do the kinds of cognitive things people can typically do.

>> Yeah. And I like that bar because if it's less than that, it feels like, well, it's failing to do cognitive things that we'd expect people to be able to do. So, it feels like we're not really there yet. >> On the other hand, if I set the minimal bar much higher than that, I'm setting it at a level where many people, a lot of people, wouldn't actually be able to do some of the things we're requiring of the AGI. So, you know, we believe people have some sort of, I don't know,

general intelligence, you might call it. So it feels like if an AI can do the kinds of cognitive things people can typically do, at least, possibly more, then we should sort of consider it within that kind of a class. >> The stuff that we have now, where is it on those levels? >> Right. Um, so it's uneven. So it's already much, much better than people at, say, speaking languages. So it'll speak 150 languages or something. Nobody can do that. Uh, and its general knowledge is

phenomenal. I can ask it about uh you know the suburb I grew up in a small town in New Zealand and it happens to know things about it right um on the other hand they still fail to do things that we would expect people typically be able to do uh they're not very good at continual learning learning new sort of skills over an extended period of time and that's incredibly important for example if you're taking on a new job you know you're not expected to know everything to be performant in the job

when you arrive, but you have to learn over time to do it. They also have some weaknesses in reasoning, um, particularly things like visual reasoning. >> So the AIs are very good at, say, recognizing objects. They can recognize cats and dogs and all these sorts of things. They've done that for a while. Um, but if you ask them to reason about things within a scene, they get a lot more shaky. So you might say, well, you know, you can see a red car and a blue car and you ask

them which car is bigger. Um, people understand that there's perspective involved and maybe the blue car is bigger, but it looks smaller cuz it's further away, right? Uh, AIs are not so good at that. Or if you have some sort of diagram with nodes and edges between them >> like a network >> a network, yeah, or a graph as a mathematician would say, um, and you ask questions about that and it has to count the number of, um, you know, edges, spokes that are coming out of, you know,

one of the nodes on the graph. >> Um a person does that by paying attention to different points and then actually mentally maybe counting them or what have you. Um the AI's not very good at doing that type of thing. So there are all sorts of things like this uh that we currently see. Uh I don't think there are fundamental blockers on any of these things and we have ideas on how to develop systems that can do these things and we see metrics improving over time in a bunch of these areas. So my

expectation is over a number of years these things will all get addressed, but they're not there yet, and I think it's going to take a little bit of time to go through that cuz it's quite a long tail of all sorts of cognitive things that people can do where the AIs are still below human performance. As we reach that, and I think that's coming in a few years, unclear exactly, the AIs will be a lot more reliable and that will increase their value quite

a lot in many ways. But they will also, during that period, become increasingly capable, like to, um, professional level and beyond, maybe in coding, mathematics, already in, you know, many languages, general knowledge of the world and stuff like this. So it's kind of an uneven thing. >> If you think that they will become more reliable over time, how? Is it just a question of making the models bigger, doing things at larger scale? Is it more data? I mean, do you have a clear path to

make them more reliable? >> Uh, I think we do, and it's not one particular thing. It's not just bigger models or more data. Um, in some cases it's more data of a particular kind, and then when you collect data that requires, say, visual reasoning, then the models learn how to do it. In some cases it requires algorithmic things, like new processes within. So for example, if you want to do continual learning, so the AI keeps learning over time, you might need some process whereby new information is maybe

stored in something, some sort of retrieval system, an episodic memory if you like. And then you might have systems whereby that information over time is trained back into some underlying model. So that requires more than just more data. It requires some sort of algorithmic and architectural changes. So I think the answer is a combination of these things and it depends on what the particular issue is. >> I know that you don't think AGI should be this single yes/no, like a

threshold that you cross, but more of a sort of spectrum, as it were, that you have these levels. Just talk me through that. >> Yeah. So I have, um, what I call minimal AGI, and that's when you have an artificial agent that can at least do all the sorts of cognitive things that we would typically expect people to be able to do. And, um, we're not there yet, but it could be one year, it could be 5 years. I'm guessing probably about two or so. >> So that's the lowest level then.

>> That's what I call minimal AGI. That's the point at which I'd say, okay, this AI is no longer failing in ways that we would find surprising if we gave a person that cognitive task. And I think that's the minimum bar. Now, that doesn't mean we understand fully how to reach the capabilities of human intelligence, because you can have extraordinary people who go and do amazing, you know, cognitive feats, inventing new theories in physics or maths, or developing, you know, incredible

symphonies, or writing amazing literature and so on. Um, and just because our AI can do what's typical of human cognition doesn't necessarily mean we know all the recipes and algorithms, everything required to achieve very extraordinary feats of human cognition. Um, once we can, with our AI, achieve the full spectrum of what's possible with human cognition, then we really know that we've nailed it, you know, at least fully to human level. And so we call that full AGI.

>> And then is there a level beyond that? >> Um yeah. So I think once you start going beyond what is possible with human cognition, you start heading into something that's called um artificial super intelligence or ASI. Um there aren't really good clear definitions of that. Um I've actually tried on a number of occasions to come up with a good definition of that. Every definition I've ever come up with has some sort of significant problems. But at least in vague terms, it means something like

it's an AGI. So it has the generality of an AGI, but it's now so capable in general, it's somehow far beyond what, you know, humans are capable of reaching. >> Cuz I know that you were one of the people who helped to coin that phrase, AGI. Do you think that it's still useful as a phrase? I mean, there's so many competing definitions now. It's sort of like the buzzword that everyone's using. And you're right that the way that it's described is

almost like a yes/no, like a kind of discrete line that gets crossed, rather than this continuum of levels as you're describing. >> Yeah. So when I proposed the term, I was thinking of it more as a field of study >> because I was talking to a guy, Ben Goertzel, who I'd worked for a year or so before, and he wanted to write a book on sort of the old vision of AI, these thinking machines, these machines that can do lots and lots of different things

rather than it's just specialized. It just plays poker. It just does text to speech. It just does you know very very specific things which were sort of typical at the time. I was like, what about the old dream of AI? Building a system that has a very general capability and it can learn and reason and do language and write poetry or do maths or maybe paint a picture or, you know, all sorts of different things. What do we call that? And uh I said to him, well, if it's really about the

generality we want, why don't we just put the word general in the name and call it artificial general intelligence. AGI kind of rolls off the tongue. >> Um, maybe we do that. But then what happened is that a number of people started using the term online, and then very quickly people started talking about, well, when will we have AGI, and so then AGI moved from being a sort of field of study, or a subfield, to a category of artifacts, right, and then it needs a definition. So perhaps it was a

mistake, and I should have gone in and defined it. Um, you know, it turned out a few years later we found there was a guy, Mark Gubrud, who had actually written a paper in '97 where he had used the term, but it was in a nanotech security conference and none of us knew about this. Um, but the way he defined it was actually in reference to the sorts of cognitive things people do in industry and other places like that. So it's quite a similar flavor to even what I'm using now. Yeah. If it had

been fixed more clearly early on, that would be helpful. >> Do you regret coining it? >> No, no, no. Because I think it gave a way for people to refer to this idea of building AIs that were actually general. >> Mhm. >> Um, or at least general to the extent that people's, you know, intelligence is general. There was a need for that, I think, and that's why I think the term caught on, because there was sort of, you know, how do you refer

to that if you're not referring to this? If people use phrases like advanced AI, well, AlphaFold is an advanced AI in some sense, right? Uh, and it's very impactful, but it's very, very narrow, right? Or AlphaGo, again, is very narrow and it's some sort of advanced AI system. So how do you refer to systems that are very general? But then what's happened is that different people saw the term and took it on, they adapted it in different ways or they looked at it through different lenses. So for

some people, back even in the early days, when they thought of AGI they thought of something in the future, decades away, and that this would be very transformative, and so they started thinking about AGI in terms of the transformation it would create in society >> and so then, if they try to define it, they tend to think about, oh, it's because it can lead to, I don't know, economic growth, or it's going to do all these sorts of things, right >> So I tend to think of it as more of

a historical point in time it's the point in time at which we sort of have to say, well, these AIs in some sense belong in a similar category to our intelligence and that they can do cognitive things that we typically can do. Um, now that doesn't necessarily revolutionize the world. The typical person walking around isn't going to be a Mozart or an Einstein and invent the successor to quantum theory or whatever, right? Um, but it's a very interesting point in time because 10 years ago, 20,

you know, whatever, we did not have AIs that were anywhere close to being able to do the cognitive things that people can typically do. So, I think this is an important sort of historical moment in that AIs are somehow in a similar category to us. >> I also think, and I think it's useful to try to define it a bit, because one of the issues that comes up is people have these different timelines, right? >> Some people say, "Oh, AGI, I think it's going to be here in 3 years. Oh, I think

it's going to be 15 years away or 20 years or whatever. Um, and often when I go and talk to them about that, I find that they're using a different definition. And so that just leads to a lot of confusion because people use the term to mean different things. And in some cases, I actually agree with what they think is going to happen. They're just using the word in a different way. And that just creates quite a lot of confusion. >> I just want to compare some of the other

definitions that people are using for AGI. So, um, some people have suggested that it's like there's a checklist of tasks, or maybe there's Humanity's Last Exam, which is this sort of language model benchmark of two and a half thousand questions across different subjects, so humanities and natural sciences. >> Um, there's other people that have said, oh, it needs to be able to perform in a kitchen, so it's sort of trained as a chef and able to be

dropped into a different kitchen and perform, or there's even one which is, um, could it be able to make a million dollars from $100,000? >> What's your take on those definitions? >> Well, each one I have a take on. >> Go ahead. >> Um, I mean, making a million dollars from $1,000 or something like that, um, that's obviously a very economic kind of perspective on it. Um, I think a lot of people would struggle to do that. Um, it's a very, I think in

some ways, quite narrow perspective on this. I mean, maybe you could have, I don't know, a trading algorithm that trades on the markets that could do that, but that's all it can do, and that's not what I'm talking about. So, I think it's the G, that's the G in AGI. It's the generality that I find interesting, and I think that's one of the incredible things of the human mind, our flexibility and generality to do many, many different things. If you have

a particular set of tasks, well, okay, maybe you can build a system that can do those tasks, but maybe it's still failing to do basic cognitive things that we'd expect almost anybody to be able to do. I think that's unsatisfying. It's like, oh, our AI just failed again because it doesn't understand that really simple thing that I would expect pretty much anybody to understand. So the way I would operationalize my definition is I would have a suite of tasks where I know what typical

performance is >> from humans >> from humans and I would see whether the AI can do all those tasks. Now if it fails at any of those tasks it fails to meet my definition >> because it's not general enough. >> Yeah. It's failing to do some cognitive thing that we'd expect people to be able to do. If it passes that then I would propose we then go into a second phase which is more adversarial and we say okay it passed the battery of tests so it's not failing at anything in our

standard collection of however many thousands of tests or whatever we have. Now let's do an adversarial test: get a team of people, give them, I don't know, a month or two or whatever. They're allowed to look inside the AI, they're allowed to do whatever they like. Their job is to find something that we believe people can typically do, and it's cognitive, where the AI fails. If they can find it, it fails by definition. If they can't after a few months of probing it and testing it and scratching their

heads and trying to find it, I think for all intents and purposes, most practical purposes, we're there, because these failure cases are now so hard to find. Even teams of people after an extended period of time can't even find these failure cases. >> Do you think that we'll ever agree on a definition of what intelligence is, or what AGI is, indeed? >> Um, in terms of AGI itself, my guess is that some years from now, the AIs will become so generally capable in so many different ways, people will

just talk about them as being AGI, and AI will just happen to mean those things. And maybe people will be less worried about it, will have fewer arguments about whether this is an AGI or not. People will say, "Oh, I've got the latest Gemini 9 or whatever it is." And it is really good. You know, it can write poetry. You can teach it a card game that you just made up and it can play it with you. It can do math. It can translate things. It can >> plan a holiday with you or whatever,

right? It's really, really generally capable and it'll just seem obvious to people that it has some sort of generality of intelligence. >> But then for now, I mean, before we get there, in terms of having this kind of defined path on the route to AGI, you talk about the risks of not having one: that it could acquire a certain piece of knowledge before another, for instance, I don't know, like being good at chemical engineering before it gets really good

at ethics. I mean, how important is it to have this work now, in advance of getting there? So, work around understanding its capabilities in different dimensions. >> Uh, I think it's very important, um, because we have to think about how do we, being society, navigate the arrival of powerful, capable machine intelligence, and you can't just put it on a single dimension. It may be superhumanly capable at some things. It may be very fragile and weak in some other areas.

And if you don't understand what that distribution looks like, you're going to not understand the opportunities that exist. You're also not going to understand the risks or the ways in which it could be misapplied because you know, oh, it's super capable over here, but you need to understand that it's very, very weak over here and so certain things can go wrong. So I think it's just an important part of society navigating and understanding what the current situation is. So you know I think a lot

of the dialogue around AI already tends to talk about it as being so, so capable, or sort of being not really that capable and it's overhyped or whatever. I think the reality is much more complicated. It is incredibly capable in some ways and it is quite fragile in others. >> You have to take the whole picture essentially. >> You've got to take the whole picture. Yeah. And it's like, you know, human intelligence as well. You know, some people are really, really good. They

speak a whole bunch of languages. Some people are really good at math. Some people are really good at music, but maybe they're not so good at something else. >> So, okay, if we've got we've sort of got performance and generality. The other sort of arm of this that I want to talk to you about is is ethics. How does that fit into all of this? >> There are many aspects to ethics and and AI. Um, one aspect is simply does the AI itself have a good understanding of what

ethical behavior is, and is it able to analyze possible things it can do in terms of this ethical behavior, and do that robustly in a way that we can trust. >> So the AI itself can reason about the ethics of what it's doing. >> Yes. >> How does that work then? How do you embed that within it? >> I have a few thoughts on that, but it's not a solved problem; I think it's a very, very important problem. I like something which some people call chain of thought monitoring.

>> Uh, I've talked about this, I've given some short talks on it and so on. I call it system two safety and >> this is the Daniel Kahneman system one, system two thinking. >> Exactly. And so the basic idea is something like this. Say, as a person, if you're faced with a difficult ethical situation, it's often not sufficient just to go with your gut instinct, right? You actually need to sit down and think about, okay, this is the situation, these are the various complexities, nuances,

these are the possible actions that could be taken these are the likely consequences of taking different actions and then analyze all of that with respect to some system of ethics and norms and morals and what have you that you have and maybe you have to reason about that quite a bit to really understand how all this fits together and then use that understanding to decide what what should be done. So let's say that the way that the human brain works in this situation I mean

this is the Kahneman stuff, right, is that, you know, someone annoys you, say, you have a rush of anger, you want to react, that's your system one, sort of quick thinking, instinctive >> but you take a breath, you think it through, consider the consequences, that's your system two thinking, and then you might choose a different path >> yes, so you might say, for example, I don't know, lying is bad, right, so we're not going to lie, but you could be in a particular situation where, I don't know,

you, you know, there's some bad people coming to get somebody, and if you tell a lie, you can save their life, and then the ethical thing to do is maybe to lie, right? >> And so the simple rule is not always adequate to really make the right decision. So sometimes you need a little bit of logic and reasoning to really think through, well, in this case the ethical thing to do is actually to tell a lie and maybe save someone's life or what have you, right?
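
Shane's "system two safety" or chain-of-thought-monitoring idea can be pictured as a small pipeline: the model writes out explicit reasoning about an ethically loaded request, and a separate monitor checks that reasoning, and the action it justifies, before anything is executed. A minimal sketch under that assumption; the names here (generate_with_reasoning, reasoning_passes_review) are hypothetical stand-ins, not a real Gemini or DeepMind API:

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    reasoning: str  # the visible chain of thought
    action: str     # the action or answer the model proposes

def generate_with_reasoning(request: str) -> ModelOutput:
    # Stand-in for a "thinking" model that exposes its step-by-step reasoning.
    return ModelOutput(
        reasoning=(
            "Consider the situation, the possible actions, and their likely "
            "consequences with respect to the norms we want to uphold."
        ),
        action=f"Carefully worded response to: {request}",
    )

def reasoning_passes_review(output: ModelOutput) -> bool:
    # Stand-in for the monitor: a second model or rule set that checks whether
    # the stated reasoning weighs consequences against agreed norms, and whether
    # the proposed action actually follows from that reasoning.
    return "consequences" in output.reasoning and "norms" in output.reasoning

def answer(request: str) -> str:
    output = generate_with_reasoning(request)
    if reasoning_passes_review(output):
        return output.action            # reasoning and action pass the monitor
    return "Deferred for human review"  # block or escalate when the trace fails

print(answer("Should I reveal where this person is hiding?"))
```

The point of the sketch is only the shape: the ethical check is applied to explicit, inspectable reasoning rather than to a gut-instinct answer, which is what makes monitoring it possible at all.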

But it gets very complicated, and you have, you know, you've probably heard of all these trolley problems and all these sorts of things, right, where our instincts and the analysis in some cases start diverging and that causes a lot of confusion, right? So this is not simple territory at all. And we have AIs now that do this, these thinking AIs, right, and so you can actually see the chain of thought that the AIs use. And so when you give an AI some question that has a moral aspect to it, some ethical

aspect, you can actually see it go away and reason about the situation. And if we can make that reasoning really, really tight, and it has a really strong understanding of some ethics and morals that we want it to adhere to, I think it should in principle actually become more ethical than people >> because it can more consistently apply reasoning, at maybe a superhuman level, to the decisions, you know, the choices that it's faced with and so on >> because that switches ethics into a

reasoning problem as it were, rather than just a sort of feeling thing. >> Yeah. >> But then at the same time, when you're saying that, I do wonder a bit about grounding. I mean, these things, certainly for now, are not living in the world as humans. >> Is it possible to sort of take what it feels like to experience the world from a human perspective and truly ground these machines in, sort of, human ethics? >> Um, well, there's a few complexities.

One complexity there is that there is not one human ethics. >> Agree. >> Um, and there are different ideas about this that vary between people but also between cultures and regions and so on. >> So it'll have to understand that in certain places the norms and expectations are a bit different. >> Um, and to some extent the models do know quite a lot of this actually, because they absorb data from all around the world. Um, but yeah, it will need to be really good at that, in terms of

grounding in reality. Um at the moment we're building these agents by collecting lots of data from the world, training them into these big models and then they become relatively static objects that we then interact with and they don't really learn much new or anything like that. um that's shifting and we're bringing in uh more learning algorithms and all that kind of thing, but we're also making the systems more agentic. So they're not just a a system that you talk to and

then it processes and gives a response, but there may be a system that can go and do something. So you can say to it, okay, I want you to write some software that does such and such. Oh, I want you to go and um I don't know, come up with a plan for my trip to Mexico and I want to see this and this, but I don't like this or whatever. And then those agents will also start to become more embodied in robotics and things like that. Some of them will be software agents. They'll

do those sorts of things. Um, but with time I think they'll turn up in robots and all that kind of thing. And as you keep going along this track, the AIs become more connected to reality through all sorts of different things. And they actually have to learn through interaction and experience rather than just sort of a large data set that goes in at the beginning. >> That's where the connection to reality tightens up a lot. That said, you know,

a lot of this data that was poured into them at the beginning came from somewhere, and a lot of it came from people. >> Mhm. >> So there is a grounding to reality that comes via that process as well. >> This idea of the AI being better at ethics than humans themselves. Until you get there, until the reasoning is as good as ours, how do you make sure that it's implemented in a safe way? I mean, >> yeah, it's a big >> stop. I don't know, like, so for example,

you know, a utilitarian argument, right, that that works quite well for for driverless cars on the roads is like you want to save as many lives as possible. But then in medicine, that same idea, right, it it doesn't work anymore. You can't sacrifice one healthy patient to save the lives of five others. How do you make sure that it ends up reasoning in the correct direction? >> Uh you can't guarantee everything. The space of possibilities of action in the world is so huge that 100% reliability

is not a thing >> but it's not a thing in a lot of the world as it exists. If you need a surgery and you go and talk to the surgeon and you say, "Well, you know, I'm going to get something removed or whatever." >> And the surgeon says to you, "It's 100% safe." As a mathematician, you know that they're not telling you the truth, >> right? Nothing is ever 100%. >> Um, so what we have to do is we have to test these systems um and make them as safe and reliable as

possible. And we have to trade off the benefits and the risks. And we also have to, you know, we have to do other things like monitor them. So when they're in deployment, we monitor them, keep track of what's going on. So if we start seeing that, you know, there are failure cases that are beyond what we consider acceptable, we may have to roll back and stop them or do whatever, right? So there's a whole range of different things we need to do. We need

to do testing before it goes out. We need to monitor it when they are out there doing things. We need to do things like interpretability, where we're able to look inside the system. That's one nice thing about system two safety: if it's implemented the right way, you can actually see it reasoning about things. But you've got to check that this reasoning is actually an accurate reflection of what it's really trying to do. But, you know, if you have ways to

look inside the system and really see why they're doing things, that can maybe give you another level of reassurance that they are sort of, you know, trying to act in the right way >> because that's another important, you know, subtlety. It's not always just about the outcome but maybe the intention, right? So there's a big difference between, I don't know, somebody hurting you intentionally and somebody accidentally bumping you and it hurts or something, right? And we

interpret that very, very differently. So if we can see inside our AIs, we might accept that, well, you know, it was dealing with a tricky situation, it tried to do the best thing it could according to its analysis, but there was some negative side effect. We might be sort of okay with that, because maybe even as people in that tricky situation it would be very difficult for us to do the right thing. But if it did the wrong thing intentionally, that's a whole different

thing. So these are all aspects of AGI safety, and we have people working on all these topics. >> So then do you sort of limit the amount that these things can interact with the real world, how quickly you release them and so on, until you feel confident that they're at the safety threshold? >> Yeah. So we have all kinds of testing benchmarks and tests, and we run them, you know, internally for a while, and we have particular things that we

test for, that are risky areas >> um >> like what? >> We try to see if the system will help develop, I don't know, like a bioweapon or something like that >> right >> and obviously it should not >> yes >> and so if we start seeing that we can somehow trick it or force it into being helpful in that area, that's a problem >> right >> hacking is another one. Will it help people, you know, hack things and so on. So yeah, we have at the moment a collection of these tests, and this

collection keeps growing over time, and then we assess how powerful it is in some of these areas, and then we have mitigations appropriate to each level of capability that we see. It could mean that we don't release the model. It could mean various different things depending on what we find. >> Yeah. Well, let's talk about the impact on society of some of this stuff, like once we get to really capable AGI, and I know that this is something that you have thought

an awful lot about. Is that fair to say? >> Yeah. My main focus now is trying to understand, what if we get AGI and it's reasonably safe for its level of capability? What about everything else? And the list of everything else is enormous. >> Mhm. >> There are questions like, um, so okay, we've got powerful AGI and it's reasonably safe: is it conscious? Is that even a meaningful question? >> Do you have a stance on that, even? >> Uh, well, we've got a group looking at that

and we've talked to a lot of leading experts in the world who study this, and I think the short answer is nobody really knows >> to be absolutely clear, we're talking about full AGI here rather than the stuff we have at the moment. >> Yes. >> Are you comfortable that the stuff at the moment is not? >> I don't think it is. >> Um, as we go into some future AGI, you know, 10 years in the future or something, which is very, very capable, will that system be conscious? When I

talk to some of the most famous experts in the world that study this, there are various people who have arguments for, there are various people who have arguments against. But when I actually put a concrete scenario to them and I say, "Look, we've got Gemini 10 here and it's embodied in a, you know, humanoid robot and it learns and it integrates information across sensors and it can remember its own history as an agent in the world and do all these sorts of things."

>> Uh, and it also talks about its own consciousness, because you can actually get AI models to talk about their consciousness now if you, you know, prompt them in the right kind of way. >> Is it conscious? And when I put that to people in the field, they're like, well, I think probably not, or I think probably yes, but actually I'm not absolutely sure. And who knows, maybe we will have an answer to that. I think it's a long-standing question and it's a very

difficult question to even make into a strict scientific question because we don't know how to frame this as a measurable thing. What I am sure is going to happen is that some people will think they are conscious and some people will think they are not. That is certainly going to happen. Um particularly in the absence of a really well-accepted scientific definition and way of measuring it. And then how are we going to navigate that? That's a very interesting question as well. But this

is just one question of you know we have things like um are we going to go from AGI say full AGI? Are we going to go towards super intelligence that's far far beyond human intelligence? Um, is it going to happen quickly, slowly, never? And if it does go to super intelligence, what is that super intelligence? What's the what's the cognitive profile of that super intelligence? Are there certain things where it's going to be far far beyond human? We already see it can

speak 200 languages or something that that's clear. And are there other things where maybe because of the computational complexity or whatever is not actually going to be much better than humans, right? >> Um do we have any idea of that? That seems like a really important question for you know humanity to be thinking about. Are we going to go into super intelligence in a decade or two decades or something like that? >> Do you have a stance on that? Do you think it will go to super intelligence?

>> Um >> I mean, I'm sort of thinking here about, like, you know, Einstein for example came up with general relativity. Will we be in a position where you have AGI that can theorize about the world, come up with genuine scientific understanding that goes beyond what humans have managed? >> Uh, I think it will, based on computation. The human brain is a mobile processor. It weighs a few pounds. It consumes, I think, around 20 watts. Um, signals are sent within the brain through dendrites.

Um, the frequency on the channel is about order of 100 hertz, or maybe 200 hertz in the cortex. Um, and the signals themselves are electrochemical wave propagations. They move at about 30 m/s. Okay. So if you compare that to what we see in a data center: instead of 20 watts you could have 200 megawatts; instead of a few pounds you could have several million pounds. Instead of 100 hertz on the channel you can have 10 billion hertz on the channel. Right? And instead of electrochemical wave propagation

at 30 meters/s, you can be at the speed of light 300,000 kilometers/s. Right? So in terms of energy consumption, space, bandwidth on the channel, speed of signal propagation, you've got six, seven, maybe eight orders of magnitude in all four dimensions simultaneously. Right? So is human intelligence going to be the upper limit of what's possible? I think absolutely not. And so I think we as our understanding of how to build intelligent systems develops, we're going to see these AIs go far beyond

human intelligence. Um, in the same way that, you know, humans, we can't outrun a top fuel dragster over 100 meters, right? We can't lift more than a crane, right? We can't see further than the Hubble telescope. I mean, we already see machines in particular areas that can, you know, fly faster than the fastest bird and all these sorts of things, right? I think we'll see that in cognition as well. We've already seen it in some aspects; you know, you don't know more

than Google, right? Um, and so on. Like, on information storage and stuff like that, we've already gone beyond what the human brain is capable of. I think we're going to start seeing that in reasoning and all kinds of other domains, so yes, I think we are going to go towards super intelligence. So that's why I'm very interested in things like system two safety, because if we can't stop the development towards super intelligence, because of competitive dynamics globally and all these sorts of things, then we

need to think really hard about how do we make a super intelligence super ethical, and if you have a system that can apply the capabilities of its intelligence not just to achieving goals and doing things but actually to making ethical decisions as well, then it might scale with its capabilities in some way. >> I do wonder what all of this means for people. I mean, if we are getting to a point where, essentially, human intelligence is dwarfed by super intelligence.

What does that mean for society? Does that mean just massive inequality, that you have the people who no longer have value, essentially, in what they can offer the economy, being completely left behind? >> It means a massive transformation. I think the current system where people contribute their mental and physical labor in return for access to resources that are generated by the economy, uh, that may not work the same anymore and we may need

different ways of doing things. Now, the pie should get much bigger. So there's not a problem of a lack of goods and services being produced. If anything, that's getting much, much better, but we need to think carefully about what's the system for people? How do we distribute the wealth that exists in society? I think there needs to be a lot more thought going into this, of how a post-AGI economy works and how the structure of a post-AGI society works as well.

I gave a talk to the Russell Group vice chancellors. So in the UK, the Russell Group is the top universities. And I said to them, look, this AGI thing's coming and it's not that far away. >> You know, in 10 years, we're going to have it and it's going to start being able to do a significant fraction of all kinds of cognitive labor and work and things that people do, right? We actually need people in all these different aspects of society and how

society works to think about what that means in their particular area. So we really need every faculty and every department that you have in your university to take this seriously and think what does it mean for education, right? What does it mean for law? What does it mean for engineering, mathematics? um city planning, uh literature, politics economics finance medicine, dot dot dot dot dot dot, right? So basically every faculty, every department studies something where human

intelligence is a really important thing. And so if you have the presence of cheap, abundant, capable machine intelligence turning up, that thing needs to be thought about again. What are the implications of this? Should it be done in a different way? What are the opportunities? What are the risks, and so on? So, I think there's an enormous opportunity here. But just like, you know, any revolution, like the industrial revolution or anything, it's complicated. It has all kinds of effects

on society in all kinds of ways. And to get the benefits of that and minimize the risks and the costs of that, we need to navigate this carefully. And at the moment I think nowhere near enough people are thinking about what AGI means for this particular thing, and we need a lot more people doing that. >> Do you remember in March 2020 when the experts were saying there's this pandemic coming? We're really standing on the edge of an exponential curve

>> and then everyone was still sort of in pubs and, you know, going to football games and things, and the experts were increasingly shouting about what was coming. >> Do you sort of feel a little bit like that? >> I remember those days well. Um, it does feel a bit like that. People find it very hard to believe that a really big change is coming, because most of the time the stories that something really huge is about to happen, they usually fizzle out to nothing, right?

>> And so as a kind of heuristic, if somebody tells you some crazy, crazy big things are going to happen, as a heuristic probably you can ignore most of those. But you do have to pay attention. Sometimes there are fundamentals that are driving these things, and if you understand the fundamentals you need to take seriously the idea that a big change does come, and you know, sometimes big changes do come. >> What does this mean though? Because, I mean, okay, you describe a sort of a

long-term vision where you have full AGI and there's, like, prosperity that can, you know, potentially be shared and so on, but getting there, I mean, we're talking about some really big >> I mean, that's an understatement >> massive economic disruption, structural risks here. Just talk us through what you expect the next few years to look like. I mean, tell us what we didn't know in March 2020. >> I think what we'll see in the next few years is not those big disruptions

you're talking about. What I think we'll see in the next few years is AI systems going from being very useful tools to actually taking on more of a load in terms of doing really economically valuable work, and I think it'll be quite uneven. It'll happen in certain domains faster than others. So for example, in software engineering, I think in the next few years the fraction of software being written by AI is going to go up, and so in a few years, where prior you needed 100 software engineers, maybe

you need 20, and those 20 use advanced AI tools. Over a few years we'll see AI going from kind of just a sort of useful tool to doing really meaningful productive work >> and increasing the productivity of people that work in those areas. It'll also create some disruption in the labor market in certain areas. And then as that happens, I think a lot of the discussion around AI is going to shift and become a lot more serious. And so it's going to shift from being just

sort of like, oh, this is really cool, you can ask it to plan your holiday and help you with your, you know, children if they're stuck on something and they don't understand their homework, or whatever, things like this. Um, through to something that's like, okay, this is not just some nice new tool. This is actually something which is going to structurally change the economy and society and all kinds of things. And we need to think about how do we structure this new world, because I do believe that

if we can harness this capability, this could be a real golden age cuz we now have machines that can dramatically increase production of many types of things, right? And advance science and and um relieve us of all kinds of labor that maybe we don't need to be doing if the machines can do it, right? So there's an opportunity here, but that is only good if we can somehow translate this incredible capability of machines into a vision of society where there is some flourishing of people as

individuals and as groups of people in society that benefit from all this capability. >> Because in the meantime, you have those 80 software engineers who are no longer needed and all of the other people, the entry level employees at the moment, you know, graduates who are sort of noticing that they're the first ones to be affected by this. Are there any industries that are not going to be impacted by this? >> Uh, in the short to medium term, I think there'll actually be quite a lot of

things. So plumbers, as the example often goes, right? Um, I think in the coming years, even if the AI does develop quite quickly in the purely cognitive sense, I don't think robotics will be at the point where it could be a plumber. And then even when that is possible, I think it's going to take quite a while before it's price competitive with a human plumber, right? And so I think there are all kinds of work which is not purely cognitive that will be relatively protected

from some of this stuff. The interesting thing is that a lot of work which currently commands very high compensation is sort of elite cognitive work, right? So it's people doing, I don't know >> um, sort of high-powered lawyers that are doing complex merger and acquisition deals across the globe, and people doing advanced stuff in finance, or now people doing, you know, advanced machine learning software engineering, all these types of things. Um, >> mathematicians,

>> one rule of thumb that I quite like is >> if you can do the job remotely over the internet, just using a laptop, so you're not in some full haptic body suit with some robot, you know, controlling whatever, just a normal interface, keyboard, screen, camera, speaker, microphone, you know, mouse. If you can do your work completely that way, then it's probably very much cognitive work. So if you're in that category, I think that advanced AI will be able to operate in that space, um,

to some extent. The other thing that I think is protective is, even if it is sort of cognitive work, there can be a human aspect to some types of work and things that people do. So for example, let's say you are, I don't know, an influencer, right? You can do that work maybe remotely, but the fact that you're a particular person with a particular personality and people know there is a person behind, you know, what's going on there, that may be

valuable in many cases, right? >> That leaves a lot of people though, doesn't it? >> I think what we need is, sort of along the lines of what I suggested to the Russell Group, we need people who study all these different aspects of society to take AGI seriously. And my impression is that a lot of these people are not. And when I go and talk to people who are interested in one of these particular things, it's like, oh yeah, it's kind of, you know, it's

an interesting tool, it's kind of amusing whatever >> but they haven't internalized the idea that what they're seeing now and any current limitations that they currently know of, which by the way are often out of date. Often these people say, "Oh, I tried to do something with it a year ago." It's like a year ago is now ancient history compared to what the current models are doing and one year from now it's going to be a lot better. Um they're not seeing that trend in some

ways. I actually think many people in the general public are ahead of the experts, because I think there's a human tendency, you know. If I talk to non-tech people about current AI systems, some of the people say to me, oh, well, doesn't it already have, like, human intelligence? It speaks more languages than me. It can do math and physics problems better than I could ever do at high school. Uh, it knows more recipes than me. Uh, it can help me with all kinds of things. I was

confused about my tax return and it explained something to me, or whatever. They're like, "So, in what way is it not intelligent?" You know, this is the sort of thing that I get when I talk to a number of non-tech people. But often people who are experts in a particular domain, they really like to feel that their thing is very deep and special and this AI is not really going to touch them. >> I think I want to end with your now quite famous prediction about AGI, and

you have stayed incredibly consistent on this for over a decade. In fact, you have said that there is a 50/50 chance of AGI by 2028. >> Yes. >> And that's minimal AGI? >> Yes. >> Wow. And, um >> are you still 50/50 by 2028? >> Yes, 2028. And you can see that on my blog from 2009. >> And what do you think about full AGI? What's your timeline for that? >> Uh, some years later. Could be three, four, five, six years later. Yeah. >> Within a decade.

>> Yeah, I think it'll be within a decade. >> Do you ever just feel a bit nihilistic with all of this knowledge? >> I think there is an enormous opportunity here. A lot of people put a lot of effort into doing a lot of work and not all of it is that much fun. And I think there's an incredible opportunity here: just like the industrial revolution sort of took the harnessing of energy to do all sorts of mechanical work, which created a lot more wealth in society, now we can

harness data and algorithms and computation to do all kinds of more cognitive work as well. And so that can enable a huge amount of wealth to exist for people and wealth not just in terms of production of goods and services and so on but you know new technologies, new medicines and all kinds of things like this. So this is technology that has an incredible potential for benefit. Now the challenge is how do we get those benefits while dealing with the risks and potential

costs and so on. Can we imagine a future world where we're really benefiting from having intelligence really helping us to flourish, and what does that look like? And that's, you know, I can't just answer that. I'm very interested in that. I'm going to try and understand the best I can. But this is a really profound question. It touches on philosophy and economics and psychology and ethics and all kinds of questions, right? Um, and we need a lot more people

thinking about this and trying to imagine what that positive future looks like. >> Shane, thank you so much. That was mind-expanding to say the least. Humans are not very good at exponentials, and right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore. What I found so interesting about that conversation with Shane is that he thinks the general public understand this better than the experts. And if his timelines are anything like

correct, and he's had a habit of being right in the past, we might not have the luxury of time for slow reflection and realization here. We have got difficult, urgent, and potentially genuinely exciting questions that need some serious attention. Now, you have been listening to Google DeepMind: The Podcast with me, your host Hannah Fry. If you enjoyed that conversation, please do subscribe to our podcast or leave us a review. Next episode we are going to be sitting down with DeepMind

co-founder Demis Hassabis. So trust us when we tell you, you don't want to miss that one.
