[Share] When AI Is Smarter Than Humans, What Do We Have Left? | Yuval Noah Harari in Conversation with Kai-Fu Lee | Bilingual (Chinese & English)
By 最佳拍档
Summary
Topics Covered
- Humans Rule Via Information Networks
- AI Agents Seize Editorial Control
- AI Target Selection Shrinks Ethical Space
- Algorithms Hack Human Fear for Engagement
- Prioritize Consciousness Over Intelligence
Full Transcript
Hello, Dr. Lee, and hello to the audience. I'm now in Paris, and I'm very happy to be with you, at least virtually. Thanks to modern technology and information technology, we can be in touch even when our bodies are thousands of kilometers apart.

It's great to see you again, Professor Harari.
We used to see each other every year or so, but COVID kept us apart, and it's great to see you. I'm a big fan of all your books, especially the new one, Nexus. I think information has been the most important element in business, in work, and in human lives today. I can certainly attest that all of my work is about dealing with information and people, and increasingly information is everything: the strategy we make as a company uses data-driven decisions; we use information and technology to form our thoughts and make decisions; our products are bits of information distributed as applications or software; and we are paid by electronic means. So increasingly the world is going digital and virtual, and what is transmitted is information. Professor Harari talks eloquently in Nexus about the history of information, and in the book we clearly see that the use of information has jumped in the digital age.

Humans control the world
because we can cooperate better than any other animal. Individually, we are not much smarter or stronger than other animals, but whereas chimpanzees, for instance, can cooperate only in small groups of fifty or a hundred, humans can create vast networks of cooperation between millions and even billions of people. This is the source of our power, and we do it by exchanging information. We don't personally know most of the other people in our country, in our corporation, in our business network; we stay in touch, we get connected, we cooperate through information. This is why information is so vital. And we don't really live in reality: we don't react to the reality of the world, we react to our image of reality, to the information we have about reality. This is why information is so crucial. And at this stage in history, we have invented the most powerful, most important information technology in history, which is AI.
The unique thing about AI is that it is not just a tool in our hands; it is an agent, an active and autonomous agent that can make decisions by itself and can invent and create new ideas by itself. Previously in history we invented many tools and many machines, but the power always remained in our hands, especially the power over information. Think about the printing press, for instance: the printing press allowed us to spread our information, our stories, our ideas, but the printing press never made any decision, like which book to print, and the printing press could not write new books. It could not create information; it could only copy and spread our creations. AI is different. AI is able to make decisions by itself, and it is able to invent new ideas by itself. This is the characteristic of AI; this is what defines it. There is a lot of hype and a lot of confusion around AI: what is AI, how is it different from previous machines or computers? This is the difference: if something cannot change by itself, cannot learn and invent new things, it is not an AI, it is just an old-fashioned machine.

So we rule the world because we created these networks of information, but now we have created something that might take control of the networks of information from us. I'll just give one example before I
give the floor to Dr. Lee for his thoughts. Think about the role of editors. Editors were some of the most important people in the world, whether editors of book publishers, news agencies, or newspapers. So much power: from the million things that happened yesterday in the world, the job of the editor is to decide which are the ten most important things that will appear on the front page of the newspaper or on the television news broadcast. If you ask yourself who the most important editors in the world are today, what are their names? They have no names, because they are algorithms; they are AIs. The editors of the big social media platforms, like YouTube or Twitter or TikTok, are no longer human beings. They are AIs. So AI is beginning to take control of the information network from us.

I agree with Professor Harari that AI will cause
it to make another, even bigger jump. In my day-to-day work, I already use AI for most of the things I do, and I think increasingly people are doing that. But many people think of AI as a chatbot or a smart search engine, which it is today; that is really just the first manifestation. We are already seeing these "give me an answer, dialogue with me" systems transforming into co-pilots that help me do tasks, and that will transition further into digital human workers, taking over part, most, or all of the work of some individuals. That will then transition into near-autopilot, where the human instructs but the AI is free to create information and make decisions as an agent. And finally, autopilots will be agents that, on behalf of a company or a person, decide to do things the person may not have instructed, but that the AI views as something it knows you want. In some sense, when TikTok sends you a video, that's a very simple form of this: you didn't ask for that video, but it thinks you want it. Amplify that intelligence by a million times: that AI knows how you want to invest and invests on your behalf; that AI knows your wife's birthday is coming up and buys the present. These are the innocuous examples, and there are many that could lead to potentially some issues.

Do you think that we will soon see AIs making major decisions in the
financial markets, like AI investors, and AI coming up with new investment strategies, even new financial instruments?

Yes, that's already happening to some extent. A number of AI quantitative funds, in both the US and China and other countries, are giving excellent returns, and these are basically AI decision-makers that every day look at all the trading data and news, decide what the best stock to buy is, and then buy and sell without human instruction. The only things humans provided are the data from which to make a decision, and the goal: make me the most money, subject to some required risk tolerance. So that's already happening. And if we look at all the tasks humans do, probably the most quantitative task is investment, because virtually all the decisions come from real-time information, and AI can read faster, can sort through possibilities, can make combination trades, and can buy and sell more quickly and with more ability to access information. So when people read that some president said something and think of buying some stock, the AI will have already done it, well ahead of the human. That's already happening.
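The automated buy/sell loop described here can be caricatured in a few lines. This is a deliberately naive sketch, a moving-average crossover rule on a fabricated price series; real quantitative funds combine far richer real-time signals, and every name and number below is invented for illustration.

```python
# Toy illustration of an automated trading rule: buy when the short-term
# average price rises above the long-term average, sell when it falls below.
# Real quant funds use far richer signals (news, order books, cross-asset data).

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def decide(prices, short=3, long=7):
    """Return 'buy', 'sell', or 'hold' from recent prices alone."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# Fabricated price series: a dip followed by a rally.
history = [100, 99, 97, 96, 95, 96, 98, 101, 104, 108]
print(decide(history))  # the rally lifts the short average above the long one: "buy"
```

The point is only structural: data in, decision out, with no human anywhere in the loop.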
So would it be like in other, narrow fields? In the game of Go, we already know that AI is far more intelligent and even far more creative than any human player. If something similar happens in a field like finance, what does it mean for society and even for politics, if humans and human governments basically lose control of the financial system, if maybe they can no longer understand the financial system and the major financial decisions? You would still have human presidents and human prime ministers and human bankers officially at the head of the system, but they would no longer be able to understand such an important part of the world, the financial system, and no longer able to regulate it or make decisions about it.

I think there's a risk of that in almost
every profession. In finance, as I mentioned, the speed and the breadth of the information are unmatchable by humans. There are some aspects, such as when you're a venture capitalist, where you want to rely on your instinct in judging a person: their micro-expressions, their sincerity, and so on. And with corporate reorganizations and M&A, you have to understand personalities and read them more deeply. But really, other than the human-to-human interaction, on all the information-based approaches we cannot do better than AI. Another profession I often refer to, which is much harder for AI but where it will eventually get there, is medicine, because AI doctors will begin to be better than average doctors, and doctors will start to use them as a reference. Then, as the AI keeps getting better, the doctor loses confidence, trusts it more, and becomes a rubber stamp, even though we place the responsibility and the consequences of malpractice on the doctor. When the doctor sees how fast the AI is improving, at some point it's inevitable: the human doctor basically feels shame at being so much less accurate than the AI and will just rubber-stamp its decisions. We move toward AI making the decisions, even though we started on a slippery slope where we said the doctor has to make all the decisions. AI improves much faster, and people will feel much less intelligent in comparison.

We already see the
slippery slope in the military. I've been watching very carefully the use of AI there. From science fiction and Hollywood, we were conditioned to expect that in war AI would appear as killer robots, that the AIs would actually be the soldiers shooting and fighting. This is still not happening. Instead, AI is entering the military upstream from the soldiers, in making the critical decisions about what the targets are: what to bomb, what to shoot. And for the same reason: to select a military target you need to analyze immense amounts of information very, very quickly, so AI has sped up the process of selecting targets by orders of magnitude. But this, of course, creates a very big ethical dilemma. If an AI used by the military tells you that a certain building is the headquarters of a terrorist organization or of an enemy unit, do you just believe the AI and bomb it, or do you get humans to verify that this is indeed correct? Maybe the AI made a mistake; maybe this is a civilian house; maybe there is an enemy headquarters, but there are also civilians who will get hurt. So what is your choice? At present I'm not sure. I talk with a lot of people, and they tell different things. Some people say that the army is just bombing whatever the AI tells it to bomb; other people say no, no, it's not like this, we have very strict ethical standards, we first get human analysts to verify that this is a legitimate target. Maybe sometimes it's one way and sometimes the other. But in this war, the AI is mainly on one side. In a future war, maybe just in a year or two, once you have AIs on both sides, the space for making these ethical decisions will shrink, because there is no time. If both sides have AIs choosing targets, and one side just believes the AI and goes ahead and bombs, while the other side says no, we have to verify with humans, well, humans are very, very slow compared to the AI. By the time the humans have verified, maybe days or hours later, that the AI did not make a mistake, maybe the target has moved, maybe they have already bombed us. So there will be increasing pressure on the military
to just do whatever the AI tells it to do.

I think this idea of adding a human in the loop to make things safe is generally not scalable. In many other problems it's the same thing: adding a human in the loop can make the process less efficient; it can cause you, in the financial markets, to not make as much money or to lose money, and in the military it's the case you describe. So I think the human tendency is to say, let's just add a human in the loop, then we feel good. Initially people will do it, but at some point, when it's competitive, I think it's going to be a game of chicken. Someone's going to say, well, it may be wrong to let AI decide, but if I lose the war, or if I go bankrupt, that's even worse, so I'm going to remove that human. I think we need other safety checks and guardrails.

Such as what? What could be the solution? If the easy solution of just keeping humans in the loop is not possible in many situations, from finance to war, because of the element of speed, what other guardrails are there?

Well, sometimes the best guardrail against a technology is another technology. When electricity was being used and it electrocuted people, people invented the circuit breaker; when the internet spread viruses to connected PCs, we got antivirus software. This time it's much, much harder, but I'd like to think that if we as technologists encourage more people to work on guardrail technology solutions, not just on building smarter AI, one might emerge.
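The circuit-breaker analogy has a literal counterpart in software engineering: the circuit-breaker pattern, a guardrail that stops calling a failing component instead of letting failures cascade. A minimal sketch, with invented names and no particular library in mind:

```python
class CircuitBreaker:
    """Stop calling an unreliable function after repeated failures:
    the software analogue of an electrical circuit breaker."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, *args):
        # Once too many consecutive failures have occurred, refuse to
        # call at all rather than keep passing errors downstream.
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: too many failures, refusing to call")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the count
        return result
```

Production implementations (as found in resilience libraries and service meshes) add timeouts and a "half-open" state that periodically retries; the point here is only that the guardrail is itself a piece of technology, sitting between the caller and the risky component.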
The other path, of course, is regulation, but I think it's hard to regulate a technology that's not well understood. To the extent we can, we should take both paths: do what we can with technology guardrails, research them, fund them, encourage smart people to work on them, but also not be afraid to apply regulation by leveraging what we already know. So the consequences of fraudulent trading, the consequences of deepfakes, the consequences of manslaughter should apply whether an AI does it or a human does it, and arguably they might even be stronger if a human uses AI to do it at scale. Regulation-wise, I personally would feel more comfortable using existing laws and extending them to AI, rather than applying the kinds of laws that are coming out where people are just guessing: let's regulate large language models, let's freeze development of AI technologies. Those things are somewhat arbitrary, unproven, and very hard to enforce.

You know, the talk of
regulation: we are talking just three days after Donald Trump was elected president of the US, on a ticket largely of deregulation, and he was strongly supported by Elon Musk. I think that all fantasies that the United States would impose regulations on the development of AI, and all fantasies of some kind of global agreement between many governments on the regulation of AI, now seem completely unrealistic. Even before the election, it was very hard to see how the US and China and the European Union and Russia could together agree on a framework to regulate the most explosive technology of the 21st century; now it seems almost impossible. So what are your views, from Beijing, two or three days after the results of the US election, on the AI arms race and the chances for any kind of global agreement or regulation?

I think reaching an agreement on regulating technology in
general is hard; it's hard to reach agreement even within a country, not to mention between countries. So again, I would fall back on saying: if AI used by a human commits a certain crime, or needs to be fined for a misdemeanor, then the existing law must be extended to cover it. But I have a hard time being optimistic that a global accord can be reached, and I understand what you mean about the election, but I think it was already very difficult. There are people talking about it, but I think the OpenAI struggle is a great example. Sam Altman is pro-progress, pro-AGI, pro-speed, and Ilya and a number of other executives were pro-safety. I would say the company most likely to build a scary near-AGI is probably OpenAI. Before, there were checks and balances between the pro-speed and the pro-safety camps; now that the pro-safety people have left, OpenAI will probably run faster, and may or may not, but hopefully will not, be reckless in doing so. So we have a difficulty in checking these powers, and then we have a potentially accelerated market leader. That's the challenge.

It seems almost like a kind of evolutionary logic, an evolutionary race: if you are more concerned about safety and you decide to slow down, to invest more in building guardrails instead of just moving as fast as possible toward AGI, then you are left behind. And if a competitor is just more ruthless and doesn't care at all about safety issues and guardrails, they just move ahead and win. So the race is between the ruthless.
You're asking about China; I'll tell you what we do at 01.AI, and I think a number of Chinese companies have followed this direction, because Chinese AI companies have always been strong on execution and less on grandiose visions like AGI. Chinese companies have also raised a lot less money than the American companies, so there's no realistic hope of getting more GPUs, betting on the scaling law, and becoming OpenAI. But it's more feasible to follow OpenAI, come up with micro-innovations, and build super-fast models that can be fitted into consumer and business applications. So I think the danger of an AGI run wild is lower in China, or at least behind the speed at which it will happen in the US, not for any reason other than the practical Chinese mentality of building useful products, and not having as much money to buy all those GPUs.

One issue, of course, is that if somebody anywhere in the world realizes this kind of fantasy, or nightmare, of AGI, it will impact the entire world.
It's such a powerful technology, and because of that it will very soon become a governmental issue: governments of other countries, including China, will be incentivized to pour immense governmental resources into staying in the race, or winning the race. And again, almost everybody I talk with agrees that it's a bad idea for humans to create something more intelligent than us that we don't know how to control. It just sounds like such a stupid idea; it's suicidal. At the same time, everybody, or at least some people, say that we have to do it, because if we don't do it, they will, and then they will control the world. So it's better if we do it first, and there is a chance that we will know how to constrain this thing. The big paradox of the whole situation is that they think they'll be able to trust their AIs, but they are not able to trust the other humans. So we have this very paradoxical situation in which humans can't trust other humans, but they think they will be able to trust these superintelligent AIs, which are an alien intelligence, like an alien species. And again, they don't know how they will be able to control it or trust it. It's a kind of wishful thinking: as we develop this AGI, we will find ways to control it; like with the electricity example, people get electrocuted, so we devise some mechanical means to protect them. But when you ask them, okay, what will it be like, what kind of mechanism, what kind of technology do you have in mind that will be able to keep the superintelligent AIs aligned and under control, they don't know what to say. They just say, we'll find something. And this is extremely frightening.

The other frightening thing
is that, as a historian, I tend to think that the big danger with new technologies is not the end result; it is the way there. It's the journey, not the destination. When a powerful new technology emerges, nobody knows how to integrate it into society and politics, how to use it wisely, and people experiment, and some experiments go terribly wrong. Look at the Industrial Revolution. When I go to Silicon Valley and talk with people there about AI, they tell me: look, when the Industrial Revolution started, and people invented the steam engine and trains and telegraphs, many people were afraid that this would cause immense damage, but it created a much better world; people in the early 21st century have much better lives than in the early 19th century. And this is true if you look just at the starting point and the end point: you see a straight line going up. But in reality, in history, there was no straight line. There was a roller coaster, with terrible disasters. When the first countries industrialized in the 19th century, first Britain and then France and the US and Japan, they thought that the only way to build an industrial society was to build an empire. They thought that to build an industrial society you need control of raw materials and markets, so you must have an empire, and every industrial nation, from tiny Belgium to Japan to Russia, went and built an empire, with terrible consequences for hundreds of millions of people around the world. Then another idea people had was that the only way to build an industrial society was to build a totalitarian regime, like Nazi Germany. The argument, again, was a logical one: they thought that the explosive power of industry, so much creative and destructive power, could only be managed by a totalitarian regime that controls all aspects of life. And this led to the horrors of Nazi Germany and to the world wars. Now we look back and we know this was a mistake; there were better ways to build an industrial society. But the actual progress of history was this roller coaster, not a straight line. Now, with AI, we don't know how to build an AI society; we have no model in history. Even if ultimately we can find ways to make AI benefit humanity, my fear is about the way there: that people will again experiment, maybe with empires, maybe with new totalitarian regimes, and that even if in the end a solution is found, along the way hundreds of millions of people could
suffer terribly.

Coming back to AI dangers, I'm wondering which you are most concerned about, near term and long term. One is humans using AI to do terrible things to other humans. Second is AI causing harm not intentionally, but because its reward function was improperly devised, so that it inadvertently hurts people. And third is AI becoming dangerous in and of itself, having the desire to do bad things, like in the science fiction movies. The third one is the least concern.

I agree. I think the first two are very big concerns, and they are already a present-day
concern. As AI becomes more powerful, they will become bigger and bigger concerns. The most obvious is human bad actors using AI to do bad things: if terrorists use AI to create a new pandemic, this is obviously very dangerous. I therefore place more emphasis, at least in public talks, on the second danger: that AI, not because it's evil or wants to take over the world, but just because it's an alien intelligence that is very difficult to control, could produce a lot of unintended consequences that we did not foresee. We already have one big example with social media. The social media platforms are controlled by algorithms. The algorithms were given a specific goal by the companies, which is to increase user engagement: to increase traffic, to cause people to spend more time on the platform. That sounded like a good goal, good for the companies and even good for the people, because engagement sounds like a good thing: if I show people things that interest them and they spend more time on my platform, that's a good thing, no? But what happened is that the algorithms in charge of social media, which decide what the next video you see on TikTok will be, what the next post you see on Facebook will be, discovered how to hack the human mind. If the goal is simply to engage people, the easiest way is to press the hate button, or the anger button, or the fear button in the mind, because these things immediately capture people's attention, for evolutionary reasons. When we lived in the African savannah, you would go around the savannah, go around the forest, and see all kinds of things: trees, clouds, whatever. If suddenly there was a snake, all your attention went there, because to survive you need to privilege what endangers you, what frightens you. So you would walk for hours in the savannah and then see a snake: all attention on the snake. But on social media, you see a snake every moment. As you scroll through your feed: snake, snake, snake, snake. We are programmed so that if there is a snake, all attention goes to the snake. That made sense in the African savannah, where you encountered a snake once every couple of days. But on social media, the algorithms hacked our operating system, and they discovered that if you want to keep a person scrolling on the platform all the time, you need to show them things that frighten them, make them angry, or make them very greedy. This is destructive to mental health, and to social health as well. Nobody in the companies at first understood that this is what the algorithms would do, and the algorithms are not evil; they did not try to harm people. They were given a goal, make people spend longer on the platform, and by trial and error they discovered how to hack the human mind. But the results are very, very bad for human society, and this can happen on a larger and larger scale.
We started by talking about AI in the financial system. You can tell an AI: okay, just go to the market and make money. You give the AI an initial sum of money, a million dollars, and access to the markets, and just one goal: make money. The AIs start inventing new strategies of investment, maybe new financial instruments; like in the game of Go, they become more creative than humans. For a couple of years there is a boom in the market, all these new AI-made financial instruments that humans don't understand, that we don't know how to regulate because it's beyond human capacity, mathematically beyond our capacity. And then one day there is a big crash, like in 2008, and no human understands what is happening, because we have lost control of the financial markets. This could be a huge catastrophe, with a lot of social and political consequences, caused by quite narrow AIs. You don't need AIs that control the whole world; they just play in the financial markets and create a huge financial crunch. These are the types of dangers I think we need
to be very, very aware of. In the long term, AIs can even start developing their own goals. Think about how humans develop our goals, where we get our goals in life. Sometimes they are top-down: somebody gives us a goal, like when you work in a company and the CEO tells you, this is your goal. But most of the time in life it's bottom-up. You can think about humans as a kind of algorithm with a very basic reward mechanism: pain and pleasure. Everything we experience in life is, in the end, just a sensation in the body. If it is a pleasant sensation, we want more; if it is an unpleasant sensation, we want less, we want it to disappear. And from these very small mental atoms, the entire human world is built. If you think about why people go to war, why people do anything, ultimately it's about pain and pleasure. And if we can engineer a reward system in an AI that is analogous to pain and pleasure, then everything else will be built from that, and the AIs could even start entire wars just to experience this AI pleasure. It sounds ridiculous: why would an AI start a war just to feel this kind of reward? But this is exactly what humans are doing. Ultimately, it's these human reward systems, pursuing pleasure and avoiding pain, making the very complex calculation that if we want to experience pleasure and avoid pain, we must start a war.
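The "pain and pleasure as mental atoms" picture maps closely onto how reinforcement-learning agents are in fact built: a single scalar reward, positive or negative, from which all behavior is derived. A toy Q-learning sketch on an invented one-state world (everything here is an illustrative assumption, not a claim about any real system):

```python
import random

random.seed(1)

# Toy world: in state "hungry", the action "eat" gives +1 ("pleasure"),
# "touch_fire" gives -1 ("pain"). The agent sees nothing but the scalar.
REWARD = {("hungry", "eat"): 1.0, ("hungry", "touch_fire"): -1.0}
ACTIONS = ["eat", "touch_fire"]

q = {("hungry", a): 0.0 for a in ACTIONS}  # learned value estimates

def act(state, epsilon=0.2):
    # Occasionally explore; otherwise take the highest-valued action.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for _ in range(500):
    a = act("hungry")
    r = REWARD[("hungry", a)]
    # Nudge the value estimate toward the observed reward.
    q[("hungry", a)] += 0.1 * (r - q[("hungry", a)])

# The entire "policy" is built from nothing but the +1/-1 signal.
print(max(ACTIONS, key=lambda a: q[("hungry", a)]))  # prints "eat"
```

Everything the agent ends up "wanting" is derived from that one scalar, which is the force of the analogy: a mis-specified reward, scaled up, yields coherent pursuit of the wrong thing.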
Right. Well, I'm very glad you said the first two are the big problems, because I very much share that view, and I think a lot of the audience might first assume AI danger comes from science fiction. That Terminator kind of danger is not the one we're worried about; it's not impossible in the very long term, but we're more focused on the other two. And I like all of your examples, but the best one, I think, is the social network, because it's close to all of us. Initially it seems innocuous: getting people to see what they like, and getting the social networks more traffic and more money, seems like a win-win. What was forgotten was what actually triggers people's likes and watching time, and mostly the greatest elements are hate and fear. Had the designers been aware of that, maybe something would have been done differently, but we're now pretty far down the road. Now I
think for a lot of the social networks, the decision to change is very difficult, because they're making so much money from it; the decision to change would anger their shareholders and employees. So how do you become more responsible? And then, a lot of the social networks use basic AI, and when new AI comes along, the danger is that it's not just sending you things that will get you to watch longer, click more, and engage more; AI will start writing things and showing you fabricated, AI-created pictures, images, audio, potentially deepfakes, and texts. So people in this AI-enabled social world may encounter even more enticing things, whether they trigger hate or fear or greed, and I'm sure there's also entertainment and happiness, but AI will do a much better job of targeting people's interests, as well as their vulnerabilities. Is that something you're extremely worried about? I imagine yes.

Yes. As you said, previously the AIs were simply
editing uh or deciding about content created by human beings all the videos all the texts it was created by human beings and the AI just decided what to
show you the next stage the AIS will create the videos the texts the images and again this is something completely new in history even before we start to
be afraid or or happy about it just to understand the magnitude of the change for tens of thousands of years we lived
inside a cultural cocon a cultural sphere which was the product of the human mind every image every painting
every statue every musical piece every story every poem Every physical tool it was the creation of the human mind so
sometimes we could encounter uh uh different cultures but they were always human cultures like when Chinese culture and Western culture
encounter so suddenly you are exposed to new music to new images to new ideas but they are still human
ideas now we will increasingly encounter an alien culture alien not in the sense of coming from other space of course alien in the sense that AI is alien
intelligence it's not human it's not organic its imagination is not shaped or limited by the forces of
evolution so it could start create completely new music and out and out effects it could have good consequences
but also bad consequences but it will be very different from anything we have seen before and it's a big question whether
we are even equipped adapted to uh uh to live in such an alien or
hybrid culture and what makes it even more scary in a way is that the AIS
cannot just create passive content they can interact with us previously we could only interact with other humans or maybe
other animals like if you have a dog or a cat and online if you interact with someone then maybe for a minute a bot
can cheat you into thinking that this is a human but if you talk with somebody for 10 minutes or one hour you know whether it's a human or a bot but the new generation of
AIs could become better than humans even at interactions at relationships and again this has some positive potential in fields like
medicine for doctors for therapy so you have an AI doctor that can talk with you and be very empathic and so forth but uh
there are also very very dangerous developments if AIs learn how to imitate emotions and how to create fake
relationships and what happens if people have more and more relationships with AIs instead of with other human beings
yeah I very much believed through the books that I have written that AI would improve rapidly when I wrote my books I didn't anticipate it would
improve this quickly in the last two years but the hope I had in writing my books was that no matter how
fast AI improves there's something innate to us being human beings that no matter how much AI can pretend to be
us can talk like us write like us fundamentally people over our millennia of
evolution have DNA that requires people-to-people connection so even if AI fakes it especially in the virtual world in the real world we still
yearn to meet people talk to people love people we feel warmth and it's not well understood what part of our brain or heart does
that but it is something that's important to us and I have hoped and I still hope that's something we can hold on to as one of the things that we
can do and AI cannot do so I'm just wondering as a historian one of the world's best historians how you feel about this part
of our evolution is that something you think we can hold on to that AI can imitate all it wants but people want that human-to-human mostly physical
presence type of trust connection and love and can that be something we can migrate towards to make our lives
more meaningful even though AI is doing a lot of our jobs so that our meaning of existence is not as
much our job still important but not as much but more the connections we have with our family and friends how
do you feel about that as something we can hold on to no matter how fast gen AI improves yeah I think this is extremely
important and the real question is what do we want from relationships if people are very
self-centered like I want to be in a relationship so that another person can provide my needs my physical needs my
romantic needs my social needs then ultimately AI can maybe provide most of our needs better than any human being but a relationship which is just
based on I want somebody to provide my needs is not a real relationship it is solipsistic it is extremely egotistical a real
relationship is about caring for the feelings of the other entity of the other person now AIs don't have any
feelings they can imitate feelings they can understand my feelings but they don't have any feelings there is this big confusion
between intelligence and consciousness that I think we've talked about in previous meetings you know intelligence is the ability to solve problems and attain goals like you have
a goal to win a game of Go and you solve all the problems on the way and you win the game this is intelligence and AI in some fields is more
intelligent than us Consciousness is the ability to feel things like pain and pleasure and love and hate if you lose the game you are sad if you win you're
happy AI as far as we know has no feelings and therefore it cannot suffer consciousness you can really
think about it as the capacity to suffer but also to be liberated from suffering and a real human relationship is caring that somebody
else is suffering and helping them be liberated you cannot have these relationships with
AIs at least not now the big problem will come I think in a few years as AIs become better and better at
imitating feelings at imitating emotions because there are strong commercial incentives to create AIs that imitate
emotions and that fool people into thinking that the AI is conscious is sentient has feelings and I think a lot
of people will believe that and of course we know this is a very ancient philosophical problem there is no way to prove the existence of
consciousness I know that I feel things because I feel them myself I cannot prove that anybody else in the world has feelings and is not a zombie I
cannot prove that animals have feelings I will not be able to prove or disprove that AIs have feelings and they
will be very good at imitating feelings so how do we know we can't know and a lot of people will insist that AIs are also
conscious and they have feelings and therefore they should be treated by the law by countries as persons with rights and I think this will become a major
political issue and maybe some countries will recognize AIs as persons with rights and other countries will say no they are not persons they don't have rights
and the political tension between countries that recognize AIs as persons and countries that don't could be even bigger than tensions today
around issues like human rights I sure hope we don't start giving too much recognition of AIs as humans
whether it's citizenship or recognition for one thing I really do agree with you that consciousness sets us apart even though it's hard to measure but we
did build the AI it's a black box in some sense but it definitely doesn't have consciousness built in and humanness should be more and more defined as consciousness not as
intelligence because AI clearly has that if it's merely intelligence in five years we're going to have to give all the AIs citizenship and that can't be
right yeah another problem I often think about is that you talked a lot about the dangers of AI but
another aspect really is that AI is becoming so capable that people will start to lose jobs some people will lose their
jobs some will have part of the job done by AI the optimistic will view it as liberating myself to do things I'm good at and like to do but the pessimistic
will view it as while AI is improving so much faster than me I'll be painted into a corner so one hope I have is that
people will gradually realize that life is not just about doing work especially not repetitive work and that AI
might be a forcing function for people to redefine themselves and find the human destiny but I don't have a road
map of how to get there and as a historian do you feel optimistic that in the short time frame we have we can make such a large transformative
change to the humanness in this world there is something optimistic and something pessimistic to say the
optimistic part is that I think we will increasingly realize that the important thing about us is not intelligence it's Consciousness so far we have been the
most intelligent animals on the planet so we took great pride in our intelligence and we focused on the development of our
intelligence now because AI is becoming more intelligent than us it forces us to rethink the very meaning of Being Human and I think the right conclusion is that
okay AI will be more intelligent but the really important thing is consciousness and therefore we should invest more in developing our consciousness
you know personally I spend two hours every day in meditation I go for a long retreat every year of 30 to 60 days of meditation to explore and
develop my Consciousness not my intelligence and I hope that in different ways people will invest more
in that and there is enormous room for advancement and development of
consciousness and if we make this investment I think we will be okay let the AI take care of many of the jobs if we develop our
Consciousness it will become a much better world the big problem I see is that to be free to develop your
consciousness and your relationships and your communities and so forth you first of all need your basic needs to be
taken care of now if the AIS are taking more and more of the jobs what will provide your basic needs in very rich
countries you can say okay the government will put taxes on the big AI companies and use that to provide
basic services and income to everybody but if you look at the whole world that's the problem what happens to the countries which
are left behind so let's say the US or China they are ahead in the AI race even if many of their citizens lose their
jobs the government has enormous new resources to take care of them but what happens in Indonesia what happens in
Nigeria what happens in Argentina that are left behind I don't see a scenario where the US government under a leader like Trump would take
taxes from the US and say okay I'm now giving this to the people who lost their jobs in Nigeria to provide them with basic services this is very
unrealistic so again if we can solve this problem I think that by investing more and more in human consciousness we can
create a much much better world even after losing many jobs but getting there is going to be very very hard and
dangerous I think right now there's just not enough understanding and awareness of how powerful AI is and to
really reach a good conclusion people have to understand how big the opportunity is and how big the challenge is to begin with secondly I think we
should really try to encourage people to think about the human part as they develop AI applications that AI exists
to help humans not to just make money not just to become another species even though these things will come to pass
and also I want to encourage the people going into AI to really think about whether they can work on something
that will make AI safe that will make humans better as a result not just think about how do I build a bigger model how do I make AI smarter how do I beat the
other person and I would encourage the businesses that harness and control AI to spend more time and
energy thinking about their social responsibility because with greater power comes greater responsibility and the people and the companies
who control the AI now have basically the biggest responsibility to the world and to the human species and I hope
people will put aside their ego and their greed and at least spend part of the time thinking about AI for good and
how to ensure that AI exists to complement and help people not to create unsolvable problems
it's a very similar message to what Dr Lee just said AI is the most important and powerful technology we ever created it has enormous positive
potential but also great negative potential we still have the power to decide which way it will go but for that more people need to understand what is
happening it is not good if maybe the most important decisions in the history of our species are taken by a very small
number of people in just two or three places which don't really represent our species so we simply need more people to
understand what is AI what is it capable of doing what are the opportunities and dangers and join the conversation this is why I wrote the book in order not to
predict the future I can't predict the future I paint various scenarios but the main message is that in all the different scenarios we are still in
control for a few more years and we need to make wise decisions and if more people understand the situation I think we'll make better and
wiser decisions that's the main message and I hope very much that indeed more people will join the conversation and it's especially important to keep the
conversation global so Americans and Chinese and Europeans and Africans can all talk to each other I hope very much to come to China
in person physically in 2025 and I hope to meet Dr Lee and to meet other thought leaders there and until then we'll keep
the conversation going it's been a real pleasure talking to you again Professor Harari I look forward to seeing you in China next year thank you so much Dr Lee it was a very
profound conversation and thank you for sharing your thoughts and your feelings and yes I hope to meet soon in China