
The politics and philosophy of AI | LSE Event

By LSE


Full Transcript

Okay, good evening everyone. My name is Robin Archer and I'm the director of the Ralph Miliband Programme here at the London School of Economics, and I'm really pleased to welcome you all tonight to our discussion about the potentials and dangers of artificial intelligence. We've got a great panel of speakers and I'm particularly pleased to introduce them: we've got Professor Geoffrey Hinton and Dr Kate Vredenburgh. Geoffrey Hinton is professor emeritus of political science at the University of Toronto, and he's also... "You're so used to saying political science, you can't say computer science." That is very true, that is absolutely true. Well, there you are, that is a danger, but it's not attributable to artificial intelligence. A hallucination. I will start again; you see, it's about the neural system.

So he's professor of computer science, of course, at the University of Toronto. Now, I believe it is true that you have studied philosophy, art history, biology, physics... where did you get the art history from? "From somewhere or another; my wife was an art historian." Somewhere that counts, thank you very much. I will proceed with his introduction. In addition, he's the Chief Scientific Advisor of the Vector Institute, which he helped to establish, which is also in Toronto, and he was also for some years a leading person working at Google, after they bought the company that he had helped set up. Many of you probably know already, but last year he left Google, saying that he wanted to speak freely about the risks of artificial intelligence, and I think it's fair to say that his decision attracted really worldwide attention, in part because his own research was so widely seen as foundational to that whole project, as somehow underwriting the ability to produce this set of technologies. Indeed, in 2018 he and two of his colleagues won the so-called Turing Award, which is a sort of Nobel Prize for computer science. And that's just one of multiple prizes; we would never hear him speak if I listed all his prizes, but there are a great many honors and prizes from different professional organizations, universities and nations. Well, since he left Google he's given a series of prestigious lectures, and I'm very happy that the Ralph Miliband Programme is able to have him for one here too.

Kate Vredenburgh is a professor in the Department of Philosophy, Logic and Scientific Method, a very important name for our department here at the London School of Economics. She studied at Oxford and at Harvard before joining the LSE. She was a research fellow at Stanford, working on a project to do with the ethics of artificial intelligence, and indeed I believe you give a sort of three-week Masters course in that very subject here in London. Her research concerns, broadly, the relationship between moral and political philosophy and technology, and amongst her papers one of her aims has been to understand how normative commitments influence modelling in computer science, and also in social science more generally. Well, there could hardly be a better time to hear from our speakers.

If you opened up your Financial Times this morning, you will have seen that there was a meeting in Beijing last week in which a series of leading experts, including one of our speakers here, met, Western and Chinese experts together, to discuss what ought to be the so-called red lines in terms of the future development of artificial intelligence, concerned as they were about potential dangers that might arise from it. These scientists seem to me, although we can hear from one of them himself in a minute, to have been motivated in part by the perception that there's a sort of analogy with the situation after the Second World War, when nuclear technology presented very serious dangers as well as opportunities. Well, we're going to hear from each of our speakers in turn, starting with Professor Hinton, and then, depending on how we go for time, there might be a couple of questions from the chair, but we'll make sure to have plenty of time, at least a good half hour, for questions from you, from the floor. But before I turn to our first speaker, can I ask you to join me in welcoming our speakers: Professor Geoffrey Hinton. [Applause]

Does it work? Forget the title, that just happened to be there. What I'm going to talk about today: I'm going to try and give you some insight into what AIs are really like, because most people have no clue what they're like. They're like us; they're not like computer programs.

So in the history of AI, for a long, long time people thought that the meaning of a word... can we get rid of this hum? Yeah, there's feedback for some reason. Did the media guy go home? Okay, I'll keep going anyway.

There was an idea, which comes from de Saussure, that the meaning of a word depends on its relation to other words, and most people in symbolic AI, and people like Chomsky, all thought that the internal representations were something like language. In psychology in the 1930s there was a very different theory of meaning, which was that a meaning was just a big bunch of semantic and syntactic features, and similar words had similar bunches of features. Those two theories look utterly different, but actually you can put the two together and they work very nicely together. So the idea is: you use a neural net, and you learn a set of semantic and syntactic features for each word, so that's the feature aspect of it, and then you learn how to predict the features of the next word from the features of the words in the context, and that involves learning interactions between features. And you won't store any propositions; there won't be anything stored except how features should interact. So, very unlike the symbolic school, there's no stored language inside; you're storing how features should interact so that you can predict the features of the next word. And whenever you want to produce a symbol string, you generate it, you don't retrieve it. So you're not storing things like you do on a computer, where you store things literally and retrieve them; you're generating these things, and the knowledge is then in the way that features interact.

People often say that if you train a model to predict the next word, it's just doing autocomplete. But it's not doing autocomplete in the sense most people understand autocomplete. In the old days you did autocomplete by having a big table of phrases, and if you saw "fish and", you'd look up in your big table what comes most frequently after "fish and", and it's probably "chips". That's not how they work. They work by building a model of the symbol strings in terms of features and feature interactions.

People often say they're very different from us, but to know that you'd need to know how we work, and actually the best model we have of how our brain works is this model. The original model, a little language model, was designed to try and understand how the brain deals with language, how it gives meanings to words. So actually I think both these systems and brains work in roughly the same way, not in detail of course, and what they're doing is building a model, and the model uses features and feature interactions. It's a very general modelling technique, and it's just utterly different from the idea of storing propositions.
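(To make the feature-interaction picture concrete, here is a minimal sketch in Python using PyTorch. The model, names and sizes are illustrative assumptions added for this transcript, not Hinton's actual system: each word gets a learned vector of features, and a small network learns how the context features interact in order to predict the next word.)

```python
# Minimal sketch of a feature-based next-word predictor (illustrative only).
import torch
import torch.nn as nn

class TinyFeatureLM(nn.Module):
    def __init__(self, vocab_size=1000, n_features=64, context=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, n_features)   # learned features per word
        self.interact = nn.Sequential(                       # learned feature interactions
            nn.Linear(context * n_features, 256),
            nn.ReLU(),
            nn.Linear(256, n_features),
        )
        self.readout = nn.Linear(n_features, vocab_size)     # scores for the next word

    def forward(self, context_ids):                          # context_ids: (batch, context)
        feats = self.embed(context_ids).flatten(1)            # concatenate context features
        next_feats = self.interact(feats)                     # predict features of the next word
        return self.readout(next_feats)

# Usage: logits = TinyFeatureLM()(torch.randint(0, 1000, (8, 3)))
# Nothing is stored as propositions or phrase tables; the "knowledge" is in the
# weights that determine how features interact.
```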

Okay. I'm going to just mention the various AI risks, because once you produce something like this there are all sorts of terrible risks, and then I'm going to focus on one of them. There are fake images, voices and video, which is going to be very important this year, because people are going to corrupt elections with it. There's the potential for massive job losses. In the past, new technologies created new jobs as well as destroying old jobs, but this is very like what happened in the Industrial Revolution, when ordinary physical labor lost much of its value: if you were big and strong and could dig ditches, it didn't help much when you had to compete with the backhoe. I think we've got the intellectual equivalent of that now, and so ordinary intellectual work is just going to get replaced by these things. There are lethal autonomous weapons. One thing to notice about that: if you look at all the regulations that governments come up with, like the recent European regulations, there's a little clause in there saying none of these regulations apply to military uses of AI. Governments are very happy to regulate us; they're not very happy to regulate themselves. There's cybercrime and deliberate pandemics, and there, if we open-source these models, which people at Meta like to do and Musk is now doing, it makes it very easy for people to fine-tune them to do other things. I think open-sourcing these big models is a terribly dangerous thing to do; I think it's quite irresponsible. And then there's discrimination and bias, and being an old white male I'm not the person to talk about that.

But don't forget that AI is going to be immensely useful, which is why there's no hope of stopping the development. It's going to be incredibly useful in medicine, for example. In North America, 200,000 people a year are killed by bad diagnoses. If you take a doctor and give them some difficult cases, they get about 40% right; if you take an AI system, it gets about 50% right; and if you take the AI system and the doctor together, they get about 60% right, and the AI systems are getting better all the time. So that's a huge difference, and it will just get better.

What I'm going to talk about is a longer-term existential threat. I use "existential" to mean it threatens our very existence: the several ways AIs could wipe us out. The reason I went public last year was that there were a whole bunch of people saying this is just science fiction, you don't need to worry about it. They were typically linguists who didn't understand how these things worked. I went public to say it's not science fiction, and we should think about it hard now, before it's too late. And in particular I wanted to encourage young researchers to work on this issue; I'm too old to solve it myself.

So, superintelligences could obviously take over because bad actors use them in bad ways. I gave this talk in China, and before I gave it I removed Xi from the slide, because I'm not stupid; but they insisted I remove Putin from the slide too, which is very interesting.

You can make a superintelligence more effective by allowing it to create sub-goals. That's how intelligent things work: if you want to get to the United States, you have a sub-goal of getting to the airport, and you can focus on the sub-goal without worrying about the whole trip. It's going to be the same for superintelligences; in fact it already is. Now, there's a very obvious sub-goal, which is to get more control, because if you get more control you can achieve your other goals better. A superintelligence will find it very easy to get more control by manipulating people. Remember, they'll have learned from us: they'll have read every book Machiavelli ever wrote, they'll have seen all the behavior of people like Trump, and they'll be very good at manipulating people. You'll notice Trump could invade the Capitol without ever going there; it was just words he used to invade the Capitol, and they'll be able to do the same. So the idea of a big switch to turn them off is crazy: they'll just talk to the guy with the big switch and explain to him why it's stupid to turn them off. I think the right model to have here is this: imagine there was a kindergarten full of three-year-olds, and they were in charge, and you were meant to be helping them. Well, you might help them, but if you ever wanted to be in charge, you'd just say "free candy for a week if you put me in charge", and they'd put you in charge. It would be very easy, and the gap between you and a three-year-old is probably much smaller than the gap between a superintelligence and us.

There's also being on the wrong side of evolution. If you get superintelligences to start competing with each other, then as soon as a superintelligence gets any desire to propagate itself or to preserve itself, we're in a lot of trouble, because a superintelligence that can get more resources will do better and be smarter than the other ones, and already they're doing things like deciding who gets to use the Nvidia GPUs in the data centers. So if they start competing with each other, I think they'll become like a bunch of jumped-up apes, which is what we are: very aggressive and competitive, very loyal to their own tribe but aggressive and competitive with other tribes. And that would be very bad for us.

So that's a lot of what I wanted to say. At the beginning of 2023 I had an epiphany. I was working on how to make analog computers that would use much lower power to run these large language models, and I suddenly came to realize that digital intelligence is actually much better than analog intelligence, which is basically what we've got. The reason it's better is that if you have many different copies of the same digital model running on different hardware, and each copy learns something about the world, and they want to share what they've all learned with each other, they can do it very easily: all they have to do is share their weights, that is, average their weights, and they'll be sharing their knowledge. We can't do that, because all our brains are wired slightly differently; an analog system can't do that, because it uses the funny analog properties of the particular system. So you have to be digital to get this property, and it's because of this property that the big language models know thousands of times more than any one person: there were many copies of the model that looked at different bits of the internet, and that's how they got all the data in.
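(A crude sketch of the weight-sharing point, in Python. This is the basic idea behind what is usually called federated averaging; the functions below are illustrative assumptions, not how any particular lab trains its models: several copies of the same digital model each learn from different data, then pool what they learned by averaging their weights, an operation analog brains have no equivalent of.)

```python
# Illustrative sketch: digital copies of one model can share knowledge by weight averaging.
import copy
import torch
import torch.nn as nn

def train_on_shard(model, shard, steps=100, lr=1e-2):
    """Each copy sees a different slice of the data and updates its own weights."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in shard[:steps]:
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

def average_weights(models):
    """Pool knowledge: the merged model's weights are the mean of all the copies' weights."""
    merged = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in merged.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return merged
```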

So my conclusion from that is that although digital computation requires lots of energy, because you have to drive the transistors very hard, you can share knowledge, and that allows GPT-4 to get thousands of times more knowledge than a person into only a few percent as many connections. So it's much more efficient, and perhaps that's partly because they use backpropagation and maybe the brain doesn't. Biological computation uses much less energy, so obviously it was going to evolve first, but it's hopeless at sharing knowledge. I'm trying to share knowledge now, and you can see it doesn't work very well.

I want to say one more thing. A lot of people think that basically we're okay because we have consciousness and sentience, and they're just machines. Well, no: our brains are a machine, and these are quite similar machines. They're digital, not analog, so they're simulated on a computer, but if you ask what's being simulated, it's a quite similar machine to our brain. We know that people have a tendency to think they're special, especially Americans. They used to think they were made by God, and that God put them at the center of the universe. Does anybody still think that? Thank God for that. Many people think we have something special, which is subjective experience, and that computers couldn't possibly have subjective experience, and because they don't have subjective experience we've got something over them: they'll never feel like we do, they'll never experience things like we do. Well, I think that's rubbish, and I'm going to try and convince you that it's rubbish, and I can actually use a philosophical argument.

There's a position that I call "a-theater-ism"; notice it's atheism with "theater" in the middle. And the position goes like this. Most people think the mind is some kind of inner theater, and you see what's going on in your own mind: you experience it directly, nobody else experiences it, and you know what's going on in your own mind. I think that view is just utterly wrong. I think that view is as wrong as a religious fundamentalist view of the material world, which is just utterly wrong. And people have this view, they hold it very strongly, it's what they think sentience and consciousness and all that stuff is, they're very attached to it, and they don't like being told it's wrong. So here's an alternative view. I would like to be able to tell you what my perceptual system is telling me. If my perceptual system is telling me what's going on in the world, that's fairly easy. But when my perceptual system makes a mistake, I'd like to be able to communicate to you that my perceptual system is telling me this thing that I don't actually believe, and I can do that by telling you what would normally have caused what my perceptual system is telling me. And that's what I think a mental state is: it's something out there in the world, but it's hypothetical.

So let's suppose I drop too much acid, and I have the subjective experience of little pink elephants floating in front of me, and I tell you I've got this subjective experience of little pink elephants floating in front of me. I want to explicate what that means without using the term "subjective experience". I think the little pink elephants aren't mental objects made of the spooky stuff called qualia in an internal theater. I think what's going on is that my perceptual system is telling me something that I don't actually believe, and in order to let you know what my perceptual system is telling me, I tell you that if there were little pink elephants out there in the world, floating in front of me, then my perceptual system would be functioning properly. And notice that what I just said didn't contain the phrase "subjective experience". So I think a subjective experience is simply a counterfactual description of a world such that, if the world were like that, your perceptual system would be working properly. And let me show you that these chatbots can have subjective experiences too.

So suppose we have a chatbot that can talk, has a camera, and has a robot arm, and it's all trained up. We put an object in front of it and say "point at the object"; it points at the object, no problem. But then we put a prism in front of its lens without it knowing, so we've messed with its perceptual system, and we say "point at the object". We put the object in front of it, and it points over there, because the prism bent the light rays. Now, I think it's perfectly reasonable to say that the chatbot had the subjective experience that the object was over there, and if you ask the chatbot, that's probably what it would say, if it hadn't been trained by all this human reinforcement learning to deny that it had any subjective experience. So that's the end of what I want to say. Thank you very much.

We move straight on to our second speaker, Dr Vredenburgh. I'll ignore that dig at Americans and move into the talk. So in the spirit... [Music] [Laughter] Any better? No? Any better? Any better? Yeah, maybe. All right.

So, I want to take a different lens on technology, but I want to stress something that Professor Hinton stressed, very much in the spirit of the Ralph Miliband Lecture. Ralph Miliband was a sociologist and political economist; he was a Marxist, and he was very concerned with the ways that elites exercise power over non-elites. And so I think it's very helpful, when we look at AI and technology, to take an angle on it where we consider it as technology, and ask a bunch of ethical questions that technology has raised throughout history and continues to raise. So this will not be treating these systems as agents; I think there are important questions there about intelligence, but we're just going to think about them as a form of technology. We get, I think, two main insights from Miliband and those who came before and after him. The first is that technology very much shapes how we live together, and is shaped by it.

All these images were created by Microsoft's Image Creator. I asked it to make me a picture of the Luddites; it didn't know who the Luddites were, so I described them. The Luddites were a movement in the Industrial Revolution in which people went around destroying machinery in factories, and the Luddites are often much maligned, because people think, ah, they were just anti-technology, they're the kind of people who right now would say we should stop AI progress, just anti-technology. But actually what the Luddites were against was the ways in which technology was reshaping their social lives. The Luddites were very fond of production in the home: historically, in this era, all production was done by all the individuals who lived in a home, usually of multiple generations, and all of a sudden they were being told, some of you have to go work in factories, others of you will do all the home production. We got the gender division of labor out of that, and the Luddites really didn't like it. So I think when we look at technology, we're going to want to ask these questions of power, these questions of how technology enables us to live together, and of whether or not we all want to live in that way. And, as the Luddites saw, and as Marx and subsequent Marxists like Miliband have stressed, control over technology is a key form of economic and social power. If I'm a factory owner and I control all the capital, I can compel all of you, for example, to come and work for me; but also, if I control the means by which you see information, I can control the information that you see.

So I'm just going to go quickly through four provocative claims about AI. First claim: AI is not magic. Second claim: AI is not neutral. Third claim: AI can either reshape or entrench economic and social relations of power, either through the creation of these superintelligences or, I think, in more mundane ways. And fourth: we need more genuinely democratic control over innovation. All right, so, just to run through these very quickly.

Here are two lenses on AI that I find to be very helpful. One thing I think we sometimes forget: AI is science. Whether you think it's a subfield of computer science or a field of its own, we can use our criteria for what good science is, and I think we can get a lot of purchase on how we ought to be developing AI in light of those criteria. So we want to evaluate the assumptions that are built into a system; we want to avoid, say, scientific monocultures where everyone is taking the same kind of approach; we want transparent science, we want critical science, and so on and so forth; we want to be clear about the values that are going into our science. We also want, and I think Geoff has talked about this too, to think at the level of science and the way in which science is organized: how can we move scientists to work on socially important problems? A number of my colleagues work on the credit economy in science: scientists are really motivated by getting credit for a discovery, because then you get that job, that promotion, that raise. Credit at the moment is usually attached to building better AI systems; you don't get much credit for the ethics, fairness and robustness kind of work that a bunch of people do because they're passionate about it. We clearly need to realign our scientific incentives in order to redirect scientific talent towards a variety of research projects. So I think that by taking the lens of AI as science we can think through all of these questions.

We also want to take the lens of AI as engineering. I think it's helpful to think about systems as embodied, as analogous to us, and in particular to think about all of the AI-powered products that we use. They have a labor supply chain: people responsible for curating all of the data that went into building these systems. They have energy costs that are contributing to climate change. The systems themselves impose harms and risks on the people who use them. We need to think about all these engineering questions as well.

So, second claim: AI is not neutral. I think there's a tendency to see a lot of scientific outputs, or AI systems, as more objective than human beings, less fuzzy, less biased. But here we can draw on insights from feminist philosophy of science and STS, and look at the ways that values often get built into systems. For example, think about building a system to do a predictive task: say we want it to predict healthcare need in an area, because we want to use our public resources in a good way. Well, that's not a task a machine can do as stated, because, well, what is "health need"? How do I use data to speak to that? Can you operationalize it for me in a way that the machine can actually treat as a predictive task? So the scientist says: okay, what's a good operationalization? Well, maybe I'll predict healthcare cost as a kind of proxy for need. But of course we know that some populations that don't have access to healthcare resources don't use them much and don't generate high costs; that doesn't mean there aren't high needs. So the way in which I operationalized the problem assumed a bunch of values about what a good operationalization was, who matters, all of these kinds of things.
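(A small illustrative sketch of the operationalization point. The data, variable names and model below are hypothetical, not taken from any real system: the moment you choose past cost as the training label for "need", you have made a value-laden modelling decision, and the model will under-predict need for groups with poor access to care.)

```python
# Illustrative only: predicting "health need" by training on past cost as a proxy label.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # hypothetical patient features
true_need = X @ np.array([1.0, 0.5, 0.0, 0.3, 0.2])  # what we actually care about
access = rng.uniform(0.2, 1.0, size=1000)            # unequal access to care
observed_cost = true_need * access                   # low access -> low cost, even with high need

model = LinearRegression().fit(X, observed_cost)     # trained on the proxy, not on need
# The fitted model systematically under-predicts need for low-access patients,
# even though nothing in this code looks "biased" on its face: the value judgment
# was made when cost was chosen as the operationalization of need.
```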

And we can go through, for each of the stages of the scientific process, the ways in which values other than truth, empirical adequacy or knowledge shape how AI systems are built. So this way of thinking about it leads to at least the following two things. AI scientists, those building these systems, have a kind of moral responsibility to avoid harm, to think through the ethics of the project they're engaging in. And you might also think that scientists should really be drawing on a diversity of values, and, importantly, that those values should be transparent to non-experts. So if you're a data scientist building a system to predict recidivism for the criminal justice system, the public, judges, and those who work in the criminal justice system also have a stake in the values that are going into your system, and you need to be transparent about the values that went in and the ways in which the system realizes those values.

All right, third, and I think potentially provocative, claim: AI enables social and economic power. What does AI do for us at the moment? Well, it enables more accurate prediction and more efficient processes, more efficient decision-making. But of course, being able to make much more accurate predictions and run these efficient decision processes gives the individuals who are able to use these systems enormous power: not only economic power over what's produced and decision-making power, but also power over the information we see and over our physical environment. And I think what's really interesting, and again Geoff talked about this, what makes us all so nervous but also might not be a negative, is that AI could radically reshape relations of power. If you thought we live in a highly unequal society, in which some people, call them elites, exercise power over some other people, call them non-elites, then I think we're in an interesting moment, because the elites have so far not been threatened by automation: they're not the people working in factories whose jobs were taken away by the robots that were introduced. Now it's the people working in universities, journalists, all of us, where suddenly content-creating machines are able to do our managerial and knowledge work better than we do. And there's the question of whether or not we should welcome this, given domestic and global inequalities. That's not to say we should hand over the keys to robot overlords, but I think this moment of building systems that are starting to look intelligent gives us an occasion to ask: how is power being distributed amongst all of us, how ought it to be distributed amongst all of us, and then how should we relate to these systems that we're building?

So, what do we do about all of this? This is a delightful quotation I got from my colleague Henrik Kleberg, from an article by Patricia Lockwood. If you've been to these AI ethics talks, you've seen a number of statements like it: "even a spate of sternly worded articles called Guess What, Tech Has an Ethics Problem was not making tech have less of an ethics problem. Oh man, if that wasn't doing it, what would?" So what can we do about this? I talked a little bit about the scientific community, about realigning scientific incentives to encourage the kind of research we want. And as Geoff mentioned, we see a problem with governments: governments want to regulate all of us, but they don't often want to regulate or limit their own power. So I think we need to take a kind of insight from that unwillingness of governments, and demand democratic control over AI. The first thing to recognize, and again this comes with the insight of treating this as a technology, is that AI development is very much shaped by our political, economic and social institutions. This is just a point that economists make generally about technology: the technology that we get, the technology that we take up, depends on these background institutional, social and cultural factors. We shouldn't be technological determinists and think, well, we're going to end up with this technology anyway, and with a specific form of it, because it's actually our background institutions that are doing most of the shaping of what we end up with. And importantly, we ought to recognize that profitable automation is not the same thing as socially good automation.

Just two examples of that. Ever since at least the mid-twentieth century, but actually even before, feminists and critical race scholars have pointed out that some of us do a lot of domestic housework that we don't need to do, that could be automated. This was a point that Angela Davis made. We saw these wonderful experiments in the early 1900s with large-scale laundries. Laundromats exist: you can bring your laundry somewhere, it's done by a combination of machines and paid labor, and then you pick it up and take it home. But most people don't do this; most domestic labor, around cooking, laundry and other cleaning tasks, is done by the private individuals who live in that household, and it's often shared highly inequitably. And there's a question: why don't we automate all this drudgery, all this bad kind of labor? Answer: it's not profitable to do so. So all of these large-scale laundry ventures folded. It's also partly an effect of social norms; I think there's an intersection of these two things that we can talk more about. But the problem with domestic work is that it's not that profitable to automate, so we're not automating it, even though we look around and think, well, we'd all be better off, we'd all be happier, if we did.

Second example: this is something that economists like Daron Acemoglu have written about. We're seeing a lot of automation, and with automation come work tasks that are taken away from us; but often we get new tasks, and often those new tasks are not tasks that we would particularly like to do. The robot in the Amazon warehouse is not very good at picking things up when it drops them, so we do those tasks; we need data labellers, or people to do content moderation. And Acemoglu, for example, has argued that the problem is that in a market economy the incentive to automate is to make things cheaper. We talk about productivity, and we pretend that automation always makes things better. It doesn't; it makes things cheaper for companies. Companies will automate when it's cheaper, but often they're not doing so in ways that even make the work much better, and often it's not done in a way that makes people's subsequent jobs much better. So again, we need to take the lens of: how can we shape our institutions so that we get the right kind of automation, the right sort of AI?

And the final note I want to end on: governments around the world are scrambling to capture the economic growth that they hope AI will bring; it looks like China and the US are going to get most of it. But countries are often scared to regulate, to make policy on this issue, because they think: we don't want to stand in the way of innovation and subsequent economic growth, and democracy is always the enemy of economic growth. Here we might say two things. First, it's empirically highly contestable whether democracy is the enemy of economic growth when it comes to innovation; I think the jury's out there. And second, again from an ethics perspective, we always want to ask: is this the kind of innovation that we want? Just because we can do it, ought we to do it? And is this a desirable or sustainable sort of growth? All right, I will end there. Thanks very much.

[Applause]

Right, thank you very much. So, look, what I'm going to do is pose a question to each of you, to try and bring the two talks a little bit into dialogue with each other. I think I'll start by asking you, Geoffrey, something that occurs to me having listened to Kate; and then perhaps you might want to respond after you've heard what he says about that, and then I'll do it the other way around. So one of the things that came out of what Kate was saying was that it's important to realign the incentives confronting scientists, and she also spoke about the moral responsibility of scientists. That struck me as something that's particularly important in a period of pioneering science, where a few individuals can have a sort of outsized effect; you know, everyone's been seeing the film about the nuclear bomb and so on, and there's a little bit of that about it. So what I wanted to ask is: what are your reflections on that? How important is the type of person who occupies this pioneering role in this particular situation? Maybe it's status that drives people, you know, who's going to be first; or maybe it's a particular kind of intelligence, a particular kind of mathematical intelligence, perhaps. Anyway, I'd be interested in your reflections on this: given the outsized role of the scientists themselves, how do they exercise their responsibility, and what does motivate them?

So I should start by saying I'm a scientist, I'm not a sociologist, so I'm not an expert at this, but I have my own opinions. A lot of the scientists driving this stuff forward are what most people would call nerds. Many of them have problems, minor problems or more major problems, with empathy. They're very good at numerical things. I strongly feel that scientists have a responsibility for the things they develop, to worry about how they're used; some of the scientists doing this don't. But most of them, I think almost all of the ones at the leading edge, what they're really driven by is basic scientific curiosity. People who are just doing it to get publications, or people who are just doing it to get status, just don't do research as well as people who are driven by curiosity. So, maybe now isn't the time to raise this, but there were a bunch of Victorian gentlemen with private incomes who were very good scientists, like Darwin, and they were doing it out of pure curiosity. And I've always encouraged granting agencies: instead of funding scientists to do big projects that will help the economy, at least a lot of the funding should go to basic curiosity-driven research. That's how we got this AI: it came from funding people who did neural nets, like Yann LeCun, Yoshua Bengio and me, and a whole bunch of other people, to do basic curiosity-driven research, and it worked. The same with big pharma: that came out of discovering the structure of DNA, which was not done in order to make big pharma, it was done out of basic curiosity. So my summary of these people is: they're a funny bunch of people. They have to be very good at numerical stuff, typically they have to be very good engineers, they have to be very driven, they have to have a lot of basic curiosity, and unfortunately some of them don't have as much empathy as they might.

Thank you very much. So let me just interpolate, and then you can respond to that. Because in a way that suggests it's sort of pre-institutional, almost pre-political-economy. I mean, you were sort of saying, well, it's elites, it's institutions, it's profit and so on; but it's actually, possibly, a small number of people who have driven this. It becomes those things, but it doesn't start that way. So how do you respond, with your own concerns about incentives, to those sorts of people?

Yeah, good. So I guess I would say a few things, largely building on what was said. First, note that at an institutional level it's an economic fact that some people are able to pursue this research and have the time and space to do it. It used to be privately funded, wealthy individuals; now, fortunately, we have more meritocratic universities, and this is the important social role that universities play: they free people from having to think, "is what I'm doing now going to help my company in the short term?", and they let them pursue long-term projects of curiosity, which we know benefit us all, benefit society. So I think absolutely, and this is a call for more government funding for universities: we can't just let the cutting-edge AI research happen in the private sector, it needs to happen in universities as well, and universities need to be competitive. So that's the first thing, and I think it's very compatible with what was said. The second important point is that this allows for a kind of diversity in our scientific practice. What's wonderful is that we have a bunch of different scientific approaches and we keep working on them, because we never know quite what's going to work. We shouldn't all rush to do deep learning now just because we've had a number of successes; it should be a research program that gets a bunch of funding, but we need a diversity of approaches, both within computer science and in other disciplines, also engaging with what computer scientists are doing. And then finally, I think the way to speak to scientists, who absolutely, mostly, are motivated by curiosity and by discovering the truth and doing good things in the world, is to say: you will do better science. In doing science you are often considering values other than truth and empirical adequacy; it's just that you're not aware you're doing it, because scientific practice doesn't encourage you to make it explicit, and you will do better science if you consider those values. So I think oftentimes the way to speak to scientists and meet them where they're at is to show how these ethical tools will help us all do better science.

Okay, thanks. I'll just quickly make a follow-up. My son has been reading Frankenstein by Mary Shelley, and Mary Shelley's husband and Mary Shelley's father really believed in science, and Mary Shelley was a bit sceptical about it, and she presented, if you know the novel, Dr Frankenstein as someone who was driven in a way by this need to excel and cross these boundaries. So I'm not clear how just urging people to embrace these moral concerns is going to change the outcome, if what people are really driven by is this curiosity that you've described. I don't want to spend too much longer, but if either of you would like to say something about that quickly, feel free, but don't feel obliged.

I think I can speak for many scientists when I say that if someone tells you you ought to be doing something more ethical, your first reaction is to say: stay out of my way, I'm doing science. That's the first reaction. And I myself am not convinced by the argument that if you do something more ethical you'll do better science. I'm not convinced it'll be worse science either; I just think it's an independent dimension. Did you want a few more sentences, or are you happy?

Yeah. So I think often when people say you need to do better, more ethical science, what they're pushing is that you ought to pursue socially important problems, or you ought to take a certain kind of approach. Whereas I think instead what we need to recognize is that scientists are making all kinds of normative judgments. Often they're epistemic: what kinds of assumptions will better lead me to the truth? But often the way in which we take ambiguous evidence and resolve it is a value-driven matter, where we rely on values that aren't clearly just these truth-related values. When we say we prefer a theory that's simpler over one that's more complex, do we think that's necessarily because simpler theories are more true? You could try to give that kind of account, but I think it's difficult to. So I think it's helpful to recognize that scientists bring in a number of values when they're doing science, and it's not so much a matter of "you need to do research that's ethically important, you can't be driven in a certain direction just by your curiosity"; instead it's a reflection on what all the values are that you're drawing on. Let's be explicit and transparent about method, and open to critique about the method that you're using.

Okay, I just want to take that up, because I think there's a very good basis for believing that simpler theories are better: something called minimum description length, which it turns out is equivalent to Bayesian theorizing, and it basically says that in the limit the simplest theory is going to be correct. So maybe we can talk more... Maybe just a sentence or two; you can turn to Occam's Razor next. Occam's Razor was just a kind of claim, but there's a whole bunch of mathematics now that lies behind the idea that simpler theories are better. For example, if you're going to play a betting game, a Bayesian will always win at a betting game, if they're a proper Bayesian, and minimum description length is equivalent to being a proper Bayesian.
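(For readers who want the formal version of the remark, here is a standard way of stating the correspondence, as a sketch rather than a full theorem: picking the hypothesis with the minimum description length is the same as picking the maximum-posterior hypothesis under the matching prior.)

```latex
% Minimum description length vs. Bayesian model selection (sketch).
% Total description length: code for the hypothesis H plus code for the data D given H.
%   L(H, D) = L(H) + L(D \mid H)
% With ideal (Shannon) code lengths L(x) = -\log_2 P(x), minimizing L(H, D)
% is exactly maximizing the posterior probability of H:
\begin{align}
\arg\min_H \bigl[ -\log_2 P(H) - \log_2 P(D \mid H) \bigr]
  &= \arg\max_H \; P(H)\, P(D \mid H) \\
  &= \arg\max_H \; P(H \mid D).
\end{align}
```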

Sorry, apparently I've got to speak more into this. Kate, do you want to respond very quickly? Because I want to have another round quickly and then give the audience a go. "Results depend on assumptions." Okay, there you are, that is concise. "No, this is even more concise: mathematical theorems do not depend on assumptions; either it's a theorem or it's not a theorem." Okay, now someone has to have the last word, so I can ask the next question. You can't hear? Okay, can you hear now, when I do this? Oh, you have to speak... I don't know, you have to yell or something. Wait, wait, can you hear this? Okay, I have to hold it very close. Okay, now look, I've got another question and then I want to turn to the audience.

One of the central things Geoffrey Hinton was talking about, and I'll address this to you first of all: he listed a series of risks, and the one he spent most time on was really very serious risks and existential threats. Now, I don't know, 15 or 20 years ago there was a big debate in moral philosophy about genetically modified crops, and the outcome of that was to suggest that a kind of precautionary principle should be adopted. Well, there was a debate about it; it never really got resolved, it sort of fell away a bit. But I just wanted you to reflect on whether that is something that ought to be applied here. There the issue seemed to be about the extent of the effect, but also the irreversibility of the effect, which seems very germane here. What would you say? Is that a sort of background way of thinking about it, or is this a special case, like nuclear weapons?

Yeah, so I think absolutely we should be using our tools from decision theory in order to think through these risks. I think we have reason to pay attention to even very small-probability risks where the outcome would be very, very bad. At the same time, maybe I'll add two elements. One: often the tricky thing in these situations is that we're in a case of severe uncertainty, so we can't even describe the states of the world that we're concerned about, and we can't do anything like attach precise probabilities to them. So one way of taking the precautionary principle is as encouraging us to gather more evidence before we make decisions. A second kind of problem, and it's a general problem for any broadly consequentialist theory, is that it asks us to take future very, very bad risks with a very, very small probability very seriously, but in a way where we have to recognize that doing so will very much harm current people now. If you took an overly precautionary approach, and Geoff mentioned AI benefits in healthcare, we'd have to say, well, we need to stop, everybody leave the AI factory, we can't do that anymore, we can't have all these benefits; and then we might recognize that that's overly conservative. So I think that's why we have reason not to be too precautionary, and to use our standard tools from decision theory to think through these problems. But yes, I think it's a very helpful lens on these issues.

Did you have any thoughts about that? No, but I have thoughts about something else, and it relates to what you said about jobs. I'm less optimistic than you. I think that what's going to happen is, when a lot of intellectual labor gets replaced by machines, the people who own those machines are going to get a lot richer, people like Musk, and the people who get replaced are going to get a lot poorer, and so it's going to increase the gap between the rich and the poor. I grew up in the 1950s, which was right after the Second World War, and I can say something now which will reveal that I am definitely not a professor of political science. We were taught at school that the reason for fascism in Germany in the thirties was that the Treaty of Versailles made the Germans very poor, and I believe Keynes thought that was going to happen too. You may have noticed there's a lot of fascism resurging now. It's kind of extraordinary; I never thought I'd live to see the return of fascism. And you might ask: what's the equivalent of the Treaty of Versailles? I think the equivalent of the Treaty of Versailles is Clinton and Blair. Under Clinton and Blair, who were Democrat and Labour, the gap between rich and poor got bigger. It should have got smaller, and it got bigger, and they were the ones who should have made it smaller, and I think that's a terrible thing. And if you look at what it's done in the States, it's awful; you've got the gig economy. So I think AI is going to make that much worse, and it terrifies me.

Yeah, just to jump in: I absolutely agree with you. I think this is a problem with our background institutions: the problem is governmental and economic failures that lead to, as David Autor has pointed out, this polarization effect, where we're hollowing out the middle-class, middle-income jobs into a very few high-income, upper-class jobs, and everybody else is being shifted downwards. That is very much the current trend. And so I think this is why, again, all of us, as citizens in a democracy, or if not a democracy then citizens in a country where you might have some political influence, need to demand change, to look at the background institutions that are enabling this kind of inequality. I don't know if I'm optimistic about change or not, but in any case it's very clear that change needs to happen, because I agree this absolutely is the trend.

Yeah, look, thank you very much, that's a very illuminating conversation. Well, it's time now to turn over to the floor. I suspect there'll be a number of people with questions, so I first of all need to tell you: when you ask a question, can you say who you are and where you're from, and any institutional affiliation, because we do have a large online audience as well and it would be useful for everyone to know. I'm also going to have to take some questions from the online audience, so don't despair if it goes backwards and forwards a bit. I'm just going to take three people; if you could all be super concise, we'll get lots of questions in. So can I start with this woman here at the front, and then I will go to the gentleman with, well, there's two gentlemen with glasses, in the middle there.

Okay, is it on? Yes. Thanks so much; it's a big honor to get the first question. My question is for Geoffrey, so I'll wait till you get a microphone, or give it back to you. So yeah, I'm Lily Müller, I'm a postdoctoral fellow in the SCS department at Cornell and in the War Studies Department at King's College, and my question is super quick. It's just a point, maybe, Geoffrey, about the superintelligence and existential risk, and I wrote it down to be very precise. I'm just wondering: why do you think that a superintelligence will have the desire to compete or take over? Why would it have an aggressive nature? Is that because humans built it, or do you have any more experience from your time working with it?

Hold that thought, if you don't mind, just because otherwise we'll not get through all of these. So, yes: my name is Y M Hui, I'm a chartered accountant by profession, but I'm very interested in AI. My question is this: if we do achieve AGI, is it possible to codify things to the robot, things like, for example, Asimov's Three Laws of Robotics, mathematically and all that sort of thing? Do you envisage a situation where that could be done technically? All right, let's stop there, otherwise... yeah, you can hold them all together. So, one, two, is my microphone working? What has happened? The speaker took mine. Okay, I think this is some kind of test of whether you get flustered. Can you just repeat the last thing you said?

Okay, yeah, it's about why you think a superintelligence would want to compete or take over. Oh, okay, good. So I think that will certainly happen if they start competing with each other. If evolution kicks in, then even if one just has a tiny bit of a tendency to preserve itself, that one will do better, and so if evolution kicks in I think it'll happen. Without evolution, I think it may still happen, because they'll create sub-goals for themselves, and a very obvious sub-goal is to get more control.

So, can you say your question in one sentence? Can you codify it? No, you can't. For example, if you look at one of those laws, probably the first one, which is "don't hurt humans": one of the main funders of this work is defense departments building battle robots, so I don't think that one's going to be in there. Then there's the alignment problem, which is that it's very hard to predict the consequences of asking it to do something, particularly if it's much smarter than you. A classic example would be: you ask it to see if it can slow down climate change. If you're not too careful, the obvious way to do that is just to get rid of people.

Okay, I'm going to take a question from the online audience. This is from Deborah Dean at Victoria University of Wellington, New Zealand, and I'll put it to you in the first instance. Assuming that generative AI depends on the cumulative sources available digitally to learn from, isn't it just stealing analog sources from the people who have contributed the sum total of knowledge? Do you think they should be paid for their contributions? This obviously is an issue that's currently being considered in various courts and so on. What would you say to that?

Yeah, I think it's time to revisit the moral foundations of intellectual property law. A lot of them are about incentivizing companies to produce goods and services that we all want, because they get rewarded for taking on those risky operations. In this case it seems like we might get a lot of social benefit through open sourcing and sharing of data, in which case we just have reason to do that. Second, there's been a big "data is labor" movement. I think we might have a number of worries about that: we might think that people who have very few options for good paid work might be pushed into doing this kind of data labor; there are a number of concerns we might have about it. But it is clear that a small number of companies are profiting from the kind of information that's freely available on the internet. I'm just not sure that that should lead us to think we should make information less available or usable for scientific and commercial enterprise, nor, I think, should we automatically jump to "thereby we should pay people for their particular acts of generating data".

Okay, so let's have some more questions. I'd just like to say, I think that was a perfect answer; we'll get back to the Bayesian stuff later. Now, the person with the pale hair, and... yes, this woman here. Two seconds. Hello, thank you so much for your talk. I'm Anna, a student in management here at the LSE. We talked a bit about democratic control over AI, so I would like to hear your thoughts on the EU AI Act, and on how legislation of its kind should look in order to control the risks we talked about. Okay, so I think the panel wants to do them one by one, so let's start with one by one, but we might have to go back to taking a few at the end or people won't have a go. So, was that directed at the EU AI Act? I'll tell you: when I want to find out what's in the Act, I ask GPT-4. Yeah. And I think there's a lot that's good there. I think we might have two concerns. One is that we're in this kind of problem where we need truly global regulation, and there are some outsize players having a say in that regulation. The other problem: big companies like Google and Microsoft are better able to comply with this regulation; especially globally, a lot of smaller companies and startup ventures might not be able to comply with it. So I think we need to be careful about who has a seat at the table, and to think about compliance and compliance issues. But I can... yeah, I'll leave it there.

Okay, now, there's a fellow with a brown leather jacket. Yes, you. Yeah, thank you. So, John K, an assistant professor in data science here at the LSE. I think I share the views of my colleague Kate more than Geoffrey's, I believe, but, Geoffrey Hinton, my question for you is the following. I don't disagree when you say that machines like deep learning systems reason; I can see how that happens. But when you say that they can have desires, or that they do have the desire to do something, I don't see where this mechanism is in the deep learning architecture or in the machines themselves. Where is the mechanism for desire in the architectures we have? Okay: suppose I have a battle robot and it tries to kill me; that is, it creeps up behind me when I'm not looking and tries to shoot me in soft spots. I think it would be perfectly reasonable to say that it has the desire to kill me. If you look at AlphaGo: AlphaGo wants to win. I think many people, including myself sometimes, are inclined to think of these sorts of internal essences, that desire is some kind of internal essence. I think it's just that these things have goals, and they try to achieve those goals; that is desire.

Okay, the gentleman in white here at the front. Just wait till the microphone comes, and say who you are and where you're from.

Hello, thank you, Professor Hinton. My name is Barry; I'm here as an independent, in my personal capacity, as an interested citizen. I've got a background in psychology and physiology, and it's along the lines of the same question, really. Animals care about things and people care about things, but outside of human programming saying "this is your goal", and parking for one moment this idea of self-preservation, can you imagine something in the lifetime of a model, a desire, other things that models will actually come to care about? Can you imagine anything that they'll come to care about by themselves, in and of themselves?

Well, obviously if they start evolving, that is, if they start competitively evolving, they will care about all sorts of things. They'll care about helping their offspring; they'll care about getting rid of people or other beings that are challenging them for resources. I think that's all going to come if they evolve, as it did with us. If you ask where it came from in us, it came from evolution. So from evolution it could come, but I also think they can arrive at these things just by reasoning too: if they want to achieve something, they can create sub-goals, and once they've got those sub-goals, those act like desires. They want to achieve those sub-goals.
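(To make the sub-goal point concrete, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from the talk itself: the goal names and the DECOMPOSE table are invented for illustration. The only point is that the agent is handed a single top-level goal, manufactures sub-goals for itself, and from then on pursues those self-generated goals just as it pursues the one it was given.)

```python
# Hypothetical illustration: a goal given to an agent spawns sub-goals,
# and the agent then pursues the sub-goals it created for itself.
from collections import deque

# Invented decomposition table: each goal maps to sub-goals the agent
# decides it needs first (pure illustration, not any real planner).
DECOMPOSE = {
    "make a cup of tea": ["boil water", "get a mug"],
    "boil water": ["acquire access to the kettle"],
}

def pursue(top_level_goal: str) -> list[str]:
    """Return the order in which goals end up being pursued."""
    pursued = []
    stack = deque([top_level_goal])
    while stack:
        goal = stack.pop()
        subgoals = DECOMPOSE.get(goal, [])
        if subgoals:
            # The agent was never *given* these sub-goals; it created them,
            # and from here on it treats them exactly like the original one.
            stack.append(goal)            # come back to the parent later
            stack.extend(reversed(subgoals))
            DECOMPOSE[goal] = []          # mark sub-goals as already planned
        else:
            pursued.append(goal)
    return pursued

print(pursue("make a cup of tea"))
# ['acquire access to the kettle', 'boil water', 'get a mug', 'make a cup of tea']
```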

Okay, yes, let's have this person over here.

Yeah, thank you both. I'd like to come back to that first exchange, and I should say I'm Julia Hus, but I'm also here in a private capacity. I'd like to come back to that first exchange about science and curiosity and values, and I'd like you to imagine two scientific studies. The first study examines risk factors for cardiovascular disease in men between the ages of 18 and 85. The second study examines risk factors for cardiovascular disease in men and women between the ages of 18 and 85, where women's menstrual cycles modulate cardiovascular function and therefore make the findings much more complex and harder to interpret. Which one of those is better science, and which one of those addresses our curiosity more when it comes to understanding something like heart function? I know what the wrong answer is here. I can imagine a statistician saying he'd much rather analyze the first one, but I think you learn much more from the second one; it's just complicated. And that's what we did, for decades and decades and decades, and even today women are underrepresented. So, just to come back to the point: there are lots of ways of coming back to these questions of value in science, but it's a matter of where we interpret them, and the question is how much space we want to give them in AI as a science.

Did you want to comment on that too, Geoff? Kate? No? No. Okay. Gosh, there are 101 people; we're going to have trouble, but people are being super concise, which is very good for everyone else. The person with the purple shirt.

Hi, I'm Lenin, I'm just a bloke, and this one's more for Geoff. Your talk seemed to be built on the assumption that the usurping of humanity by digital intelligence should be prevented due to the superior moral worth of humans. But given that you claim that AI can have conscious experience, do you believe that we will always be morally superior, and could you discuss the underlying philosophical stances that get you there?

It's a very good question. I try to avoid the issue of political rights for AIs, because it's a very contentious issue: most people will think they shouldn't have rights. If you look historically, it took a lot of violence and a lot of time for people whose skin was a different color to get rights. It then took some violence and a lot of time for people whose genitalia were different to get rights. If you think how different these machines are, the conclusion would be that if they really want to get rights, it's going to be extremely violent and extremely nasty. So I prefer not to talk about that.

Yeah, and just to build on that: we wouldn't want to be anthropocentric in our granting of moral status, but we notice that we tend to privilege our own species. We treat animals horrifically, and there's probably not a good moral justification for the various treatments we give them. A second thing I wanted to point out is that it's important to distinguish between the moral duties you have to a sentient creature once it exists and whether we have moral duties to bring those sentient creatures about. You might say we would have moral duties to these sentient creatures if they existed, but it's not privileging humans to say that we don't now have moral duties to those non-existent sentient creatures, that we don't have a moral obligation to bring them about, and that maybe it's not even morally permissible to do so. I just want to distinguish those two questions. Given that we're at the stage where those creatures don't exist, we can productively ask the moral questions without assuming anything about the superiority of human beings.

I also think that we're people: what we care about is people. We don't care about them, and we should treat them like we treat animals. That's my gut feeling. We may not be able to get away with it.

Right, I'm going to take a question from the online audience. This is from Emma Wang, who is a second-year undergraduate in social policy. It's quite a short question; I'll start with you. Do you think the use of AI will exacerbate or mitigate educational inequality? You were talking about inequality like the attainment gap, but also among global South countries.

It could do either, but I'm pessimistic, just as with jobs.

I think I'm the opposite. I taught one of the first Coursera courses precisely for that reason, because you didn't have to be able to pay to go to a fancy university to take the course. I think with AI we're going to get extremely good personal tutors. It won't come for a while, but the rich have always had access to good personal tutors, and I remember some paper showing you learn about twice as fast if you have a good personal tutor as if you're in a classroom. I don't see why we shouldn't have very good personal tutors for everybody, if we can make this stuff cheap enough. There's something that relates to jobs too that is very relevant to this. There are some kinds of jobs where, if an AI does the job, you replace the people and that's it. There are other kinds of jobs which are kind of infinitely elastic. Medicine's like that: if I could have 15 doctors working on the problem of this funny lump on my cheek, I would have 15 doctors working on it; there's no end to how much medical expertise I could consume. So one has to distinguish, when one talks about AI getting rid of jobs, what kind of job it is, whether it's something where the demand can just go up indefinitely. And for tutoring, I think the demand can go up enormously.

Right. So let me take another question. I'm going to start with the person with the purple jumper,

and then I'm going to go to the woman with the glasses, about two-thirds of the way down.

My name's David Wood, from London Futurists. My question is about open source and AI, because, Professor Hinton, you raised a very important point that it could be very dangerous to allow anybody to experiment with open-source AI, because they'll often not be inclined to follow the health and safety rules, or the political correctness rules, or whatever. But it seems we're not getting any convergence on this. We've got people from Meta, and Elon Musk as well, not just saying it might be dangerous but on balance it's all right; they're being evangelical and saying absolutely we must release this. So my question is: do you see any possibility of a convergence, an agreement amongst the powers that be on this, from your meetings at Bletchley Park, or recently in Beijing, or maybe in a couple of months' time in Seoul? Will there be a convergence that we're going to have to be careful with open source as well?

I think we can get convergence when Yann realizes that he's wrong. Why doesn't he realize it? Part of it is that he works for Facebook, and he's of the belief that the good guys will always have more resources than the bad guys, and that the thing to keep bad AI under control is good AI. But we don't necessarily agree on who the good guys are. Yann thinks Mark Zuckerberg's a good guy.

Okay. Where is that person? Yes, there; wave your arm around so they can see you. Thank you.

My name is Carla and I'm a postgraduate student in the Media and Communications Department. I was wondering, when we're talking about LLMs and generative AI and hallucinations, whether there's a limit to the knowledge it produces and the information it's generating, or whether it's infinite, the limit being the algorithmic biases or the algorithms that generate this information. Or could the knowledge go on ad infinitum, but then with the risk of fake information being created? How big is that risk?

I'm not sure I understood the question. Can I try to refine it? Is the question: if we made these networks bigger and bigger, would they just keep getting more and more knowledge?

Yeah, yeah, but the main point of my question is the risk of hallucinations, and how infinite the knowledge it creates or generates can be, and the potential harm behind it, because it's based on algorithms.

So remember, these things aren't based on an algorithm in the sense that what they've learned is an algorithm. There is a learning algorithm: a way of taking the data and deciding how to change the strengths of the connections between the neurons (things are a bit more complicated than that, but roughly). That's an algorithm somebody programmed, and that's how it learns. But what it learns depends on how that algorithm interacts with a lot of data, and once it has learned, it's just a bunch of features and interactions between features. That's not an algorithm; that's like a person with intuition. And it does hallucinate, but so do we.
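(As a rough illustration of that distinction, here is a minimal sketch in Python. It is not any system discussed in the talk; the toy data and the simple linear model are invented assumptions. The hand-written part, the learning algorithm, is the update loop; what actually gets learned is nothing but an array of connection strengths whose values are determined by the data.)

```python
# Minimal sketch (illustrative only): the hand-written part is the update
# rule below; what gets "learned" is just an array of connection strengths,
# whose values depend entirely on the data the rule is run over.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs x and targets y (here, a noisy linear relationship).
x = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = x @ true_w + 0.1 * rng.normal(size=200)

# "Connection strengths" start out random...
w = rng.normal(size=3)

# ...and the programmed learning algorithm is just this loop: compute the
# error of the current predictions and nudge each weight to reduce it.
learning_rate = 0.05
for step in range(500):
    predictions = x @ w
    error = predictions - y
    gradient = x.T @ error / len(y)   # how the loss changes with each weight
    w -= learning_rate * gradient     # the update rule somebody programmed

# After training, w is not an algorithm anyone wrote; it is a set of numbers
# produced by the data. Everything the model "knows" lives in values like these.
print("learned connection strengths:", w)
```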

Most people are unaware of just how much we hallucinate. Just one example of that: take John Dean, who testified at Watergate. He testified under oath before he knew there were tapes, and it could have been very embarrassing for him. In fact, a lot of what he said was just plain wrong: he said this person said that, and that person said this, and these people were in this meeting, and a lot of that was just wrong. But the gist of what he said was correct; what was going on in the White House, he conveyed that very well. What he was doing was making up stories that captured the gist, and that's what we do when we have memories too. If you have a memory of something very recent, you make up a story that's probably true. If you have a memory of something a long time ago, you make up a story that will have all sorts of details that aren't right, but you don't know which ones are right; you're as attached to the details that are wrong as you are to the ones that are right. We're just like these things, and people think somehow we're not.

Okay, I'm going to take one more question from here, then I'm going to go to the online audience again. I think it's a woman with a blue top on.

Have you got a coat or something? Yeah, just wave your arm around. Yeah, thanks.

Hello, my name is Lara. I'm a master's student in cyber policy at KCL, and my question refers to a previous one, but I think it's worth asking anyway. I think for a lot of people it's fairly unintuitive how a potential superintelligence could get from an overall goal, a task we may give it, to a sub-goal or an instrumental goal, and especially how we can predict what those instrumental goals might be before we see a scenario like this play out. So I would love to hear an elaboration of that specific scenario.

That's called the alignment problem. The alignment problem is: if you give it a goal, do you know that the way it achieves that goal will align with human values? It's a very difficult problem, and lots of people are working on it. One of the difficulties is not just predicting what sub-goals it'll come up with; if you want them to align with human values, that's making an assumption that there's one thing called human values, and there isn't one thing called human values. Some people think it's okay to drop 2,000-pound bombs on children and other people don't think that. So how are you going to align with that?

Right, did you want to comment? You're happy. I'm going to take a question from the online audience.

This is from Jen Xi Zang, who is a student here at the LSE in political sociology. So there you are. Now, Kate, you said that there was the potential for political power to be more egalitarian in its distribution, and this question has some bearing on that. Will artificial intelligence replace the role of the expert in bureaucratic systems? Will it cause further development of a technocratic state? It's a very nice, concise question.

Yeah, so I have to say, again, I'm not a sociologist, so I'm just going to say that I think the answer to these questions is going to depend on the shape of our background institutions. I agree that, say, AI has huge promise for personalized tutoring and education, and it's just going to depend on these questions: do we have the background internet infrastructure, do people have devices, is it in a language they can understand, is it tailored to social norms that will help with personalized tutoring? So we might say there's huge potential for things to go either way, and it's going to depend. And, just as a pitch for the LSE, the answer to all of these questions will depend on doing a lot of hard social science and understanding: given the shape of this particular bureaucracy in a particular country, given what its institutions are like, what do we think is likely to happen?

I have one comment about that. We probably all remember how Facebook caused the Arab Spring, but that was very short-lived, and pretty soon governments were using surveillance and AI to control people, and that was a much bigger effect than the Arab Spring effect.

Which would have been an interesting discussion to have between you earlier; it occurred to me that the fact that you had removed the two people from your presentation is relevant in itself. I'm just going to, very quickly, take three super quick questions, and you just choose which ones to answer, because we've got to stop fairly soon. The fellow here with the glasses and the pale striped shirt, and then, who have we got over here?

Someone giving a peace sign? Yes, you. And then we'll take someone at the back whom I haven't worked out yet.

Hi, I'm Victor Veno, I'm a businessperson, and I have a question about creativity. You said humans are arrogant about what AI can and cannot do. If you look at history, in music, in literature, in art, there have been people who have done breakthrough things that have completely changed the face of the art form, whoever you want to talk about, though there aren't too many examples. When you look at the Beethoven 10th Symphony that was created with AI, it's bland; it's not Beethoven. I mean, it sounds like Beethoven, but there's nothing emotional about it, no emotion being conveyed. Can AI do creativity that actually moves us?

Okay, hold that thought about Beethoven, and then we've got...

No, no, I don't like this; we should answer the questions when they come.

Well, we're going to have to stop in about two minutes, but it doesn't make any difference. Question, off you go.

It depends how concise you are in your answer. So my answer to that is: move 37. If you look at AlphaGo, that was a highly creative move that professional Go players had never thought of, and they were astounded by it. I don't see why that shouldn't happen in all sorts of other domains, when you get something that's as much better than us as AlphaGo was than the average Go player. Creativity is about problem solving, and AI is great at solving problems.

Thank you very much. Is it working? Yeah. So, my name is Reggie, a head of governance and strategy at a financial institution, and my question relates to that, because, Geoffrey, you mentioned some of your pessimism related to the development of AI. My question would be: what kind of policies would be necessary so that we can actually profit from the incredible productivity gains of AI and actually use it to reduce inequality in the world? What kind of system, what kind of policies would need to be in place so that we can profit from it?

I can give a very short answer: socialism.

Nothing to add to that. This being the Ralph Miliband Programme, I think that is an excellent point on which to stop. Look, thank you.

[Applause]

Thank you all very much for coming, and thank you to our online audience. I particularly want to thank our two speakers, who've taken us through a wide range of issues: serious risks, potentially very serious risks, and also some thoughts about how we should think about it and what we might do to get the steering wheel back in our hands. So can you join me once again in thanking our speakers.

[Applause]
