
Humanity's Last Bastion: Can Emotion Stand Against AI?

By 罗振宇 (Luo Zhenyu) - 罗辑思维 (Luogic TalkShow)

Summary

Topics Covered

  • Multiple Perspectives Unlock Deep Understanding
  • Books Forge Reality Maps
  • Stories Enable Mass Human Cooperation
  • Intelligence Solves, Consciousness Feels
  • AI Agents Invent Beyond Humans

Full Transcript

[Host speaks in Chinese.]

[Laughter] [Music] Oh, thank you. Yeah, I try to give different perspectives on the same issue. Sometimes it's a human perspective, sometimes an animal perspective. Even this book, Nexus, even though it's about AI, has a pigeon on the cover, an animal. Sometimes it's the viewpoint of gods or of future entities.

Because to really understand a subject deeply, I think we need to look at it from different perspectives. And we humans are often so caught up in our own view of the world that it becomes difficult for us to really gain a deep understanding.

[Host asks a question in Chinese.]

[Music] I'm not sure about the physical format. You know, even I myself these days listen to audiobooks more than I read physical books, because I have more time to listen when I'm cooking or going for a walk. I don't think the physical form is so important, but the idea of telling a long story, of having an argument developed over hundreds of pages and not only in a few minutes, this is important, and this I don't think will change, because it addresses something that the human mind, the human brain, craves.

What we really want is not just to be bombarded by short pieces of information, like TikTok videos. What we really need is a big picture of the world. Okay, I have all these small pieces of information; how do I glue them together, stick them together, into a big picture of my life and of the world in which I'm living? Think about it like a map of reality. No matter who you are and where you live, you need a map of reality, and you cannot get it from just a large number of small bits. You need to piece them together. And this is what books allow. And I think that even if their form changes, the essence will still be relevant, even in 20 years or 200 years.

Maybe it could, but at least in history you cannot avoid the long narrative. You cannot create an equation of history like in physics. In physics, the ideal is to reduce the whole universe to a short equation like E = mc². History works according to a different logic. The logic of history is narrative: how a lot of different events influence each other and create a political or economic process, and there is no equation that can describe it briefly. So I find it hard to imagine how a history book could be condensed to an equation or to a single page.

If you think about reality, and this goes back to the issue of perspectives we talked about earlier: if you want to describe reality, you have to look at so many different details, because reality has no center. Think about it as a movie. Let's say you're making a movie. Every movie has a frame. You point the camera at one thing, which is in the center. You see a few things on the side. Most things you don't see.

But in reality there is no such thing. There is no camera of reality which tells you: this is the center of reality, these are the important people, these people are less important, and these people can be ignored completely. So really describing reality is an almost endless mission, with always more details that you need to take into account.

I would say experiment. I mean, different people, not just children, think in different ways. If somebody can absorb knowledge and wisdom more easily from documentaries or from movies, then why not? I don't think we should be wedded to the specific format of a book. One thing I still think is an advantage of a book is the way we read: we read something, then we think about it, then we read some more and think about it. We can stop. In a documentary, or in a movie in general, you can't stop. You just let the movie wash over you. So, in a way, there is less room for exercising your imagination, for exercising your mind, your brain. But still, in general, the key thing is to find what works best for different people and then to encourage that.

The next one? Well, I'm actually working on a book series for children. It's called Unstoppable Us. It takes the ideas of both Sapiens and Nexus and conveys them, tells them, to children aged between, say, 8 and 12. So it tells the history of humanity from the Stone Age until today, and it also covers present-day scientific developments and the AI revolution. Three volumes in the series are already published, and I'm now working on the fourth and last volume, which will be published next year.

Well, the book tells the long-term history of humanity over tens of thousands of years, from the time we were just insignificant apes, animals living in Africa, until we became the dominant species of the planet. Today we can fly to the moon and split the atom and write DNA and create AI. So the main question of Sapiens is: how did this insignificant animal take over the entire world? And this story remains as relevant today as it was 10 or 15 years ago. To really understand not just the big political and economic developments but our own personal lives, I think we need to know the deep, long-term history of our species.

Even think about something like the food we eat. A lot of people have issues with food: they eat too many sweet and fatty things, they think maybe they need to go on a diet, and we sometimes don't understand why we have the urge to eat things that are not good for our bodies. Is something wrong with us that makes us like things that are not good for us? It sounds very strange. To understand this, you need to understand the long-term history of humanity. A hundred thousand years ago, more or less, when our ancestors lived in wild forests, food was scarce. So if people went around the forest and found a tree full of ripened fruit, something sweet, the sensible thing to do was to eat as many of these sweet fruits as possible, because this was a valuable source of energy. If you ate just two or three fruits and then went away and said, I will come back tomorrow and eat some more, by the time you came back the next day, maybe all the fruits were gone, because some monkeys had found the tree and eaten all the fruits. So we were programmed by evolution, by our living conditions in the distant past: if we encounter something sweet, eat as much of it as quickly as possible. Now, this program is still inside our bodies, still inside our brains, today. So when you go to your refrigerator, open it, and find a chocolate cake inside, your body doesn't know that you now live in a big modern city with supermarkets and refrigerators. The body still thinks it is in the ancient forest. So you react to the chocolate cake the same way your great-great-great-ancestors reacted to the fruit tree. And so, to understand how we eat today, we need to know the deep history of our species.

To give another example, many young children sometimes wake up in the middle of the night afraid that there is a monster under the bed, and call their parents in fear. This too is strange. What is happening? This is also a memory from millions of years ago, when humans lived in the wild and there actually were monsters that came to eat young children in the middle of the night. A lion would come, or a bear would come, and if the child got up and screamed and called their parents, they would be saved. If they continued to sleep, they were eaten. They died. So even these small daily incidents that we know from our life today, to understand them we need to know the long-term history of our species, which is what Sapiens is all about.

The biggest takeaway is that we control the world because we can cooperate in very large numbers, and we humans can cooperate in very large numbers because we can tell stories. So our most important ability is not our intelligence, it's our imagination: the ability to tell and believe stories. Other animals, like chimpanzees or elephants, can cooperate only in small numbers. Fifty chimpanzees can cooperate, because cooperation among chimpanzees relies on personal acquaintance. If we two are chimpanzees and we want to cooperate on something, we need to know each other personally. Two chimpanzees who never met before cannot cooperate. Humans are much stronger than chimpanzees not because we are physically stronger, and not because we are more intelligent, but because we can cooperate in much larger numbers.

Economic structures work in the same way. Money is probably the most successful story ever told. We don't usually think about money as a story, but this is what it is. If you look, for instance, at a bill, a currency note, it's very strange, because what is this thing? It has no objective value. You cannot eat it. You cannot drink it. Why does it have any value? It has value because of the stories we tell about it. There is a story told by bankers and finance ministers that this worthless piece of paper is actually very valuable, that this piece of paper is worth one banana.

And if everybody believes it, it works. I can take this worthless piece of paper, go to the market, meet a stranger, a person I've never met before in my life, give him this piece of paper, and in exchange get a banana that I can eat. Chimpanzees can't do that. If you try to give a dollar note to a chimpanzee and expect the chimpanzee to give you a banana, it will never work. This is why chimpanzees don't have trade networks, but humans do. So this is the most important message of Sapiens: we control the world because we can invent and believe stories, like the story of money.

We are surrounded by new stories. For the first time in history, though, some of these stories are invented not by human beings but by something else. Previously in history, all the stories that shaped our lives came out of the human imagination: not just children's fairy tales and poems and mythologies, but also economic stories like money and economic theories, and political stories like ideologies. All of them came from the human imagination, from the human mind. Now there is something other than a human mind that can invent stories, and this is AI.

So for the first time in history, we begin to live inside a culture, an environment, that is shaped by the stories of a nonhuman intelligence, and nobody knows what the consequences will be. It's like encountering a completely alien culture. If you, say, leave China and go visit another country, like the United States, you will encounter a different culture. But American culture also comes from the imagination of human beings. AI can create a culture far more different than anything that exists today on Earth.

Maybe you've heard about this: at the beginning of the AI revolution, about 10 years ago, in 2016, there was a famous match between the AI AlphaGo and the human Go champion Lee Sedol, and this was considered one of the founding events of the AI revolution, because AlphaGo defeated the human champion Lee Sedol. But the interesting thing was not just that an AI defeated a human champion. It was the way it defeated him. It invented a completely new way to play Go.

You know, humans have been playing Go for thousands of years, since it was invented in ancient China. And after thousands of years, we thought that we had explored all the different ways to play Go. But then AlphaGo came and showed us that this was not so, that the human imagination is actually limited by the limitations of the human brain. Over thousands of years we had explored only a small area of all the possibilities of playing Go, and AI came and exposed and invented completely new ways to do it. Now, this is likely to happen in more and more fields, as AI creativity changes not just the way we play games, but also the way we write books, and the financial system, and all areas of art and culture, and even politics.

AI will invent new kinds of poetry. It will invent new financial systems, new kinds of money. Cryptocurrencies, for instance, are now becoming more and more important. We now have new cryptocurrencies, new kinds of money, which are created by an AI, not by the human mind.

So what might happen if we find ourselves inside a financial system that was invented by an AI and not by a human?

We may be like horses and cows in the old world. Horses and cows are part of the economic system, but they do not understand it. If I have a horse, I can sell you that horse in exchange for money. The horse can see that something is happening: in the morning I belonged to this person, now I belong to that person, and they exchanged some piece of paper. But the horse doesn't understand what is happening, because the horse doesn't understand money. This is a story that horses can't understand.

Now, what happens to humans if AIs invent new kinds of money and our lives are shaped by financial transactions that we don't understand? Maybe one company fires us and another company hires us because of decisions made by AIs, and we don't understand why. Or we apply to a bank to get a loan, and an AI decides whether to give you a loan or not, and when you ask the bank, why did you refuse to give me a loan, the bank says: we don't know. We asked our AI, the AI said don't give him a loan, and we refused the loan. But we have no idea why the AI said no.

Exactly.


Yeah, that is possible. I mean, the danger is that humans lose the ability to cooperate effectively while AIs are constructing their own networks of cooperation, and then the AIs will organize everything and rule the world, while we will no longer be able to understand our lives and the world in which we live. That is a very big danger.

I think that, again, it is still a very big danger, but it's not a prophecy. It's not inevitable. We should do our best to prevent this situation of extreme inequality. But if we don't make wise decisions, the danger is that a very small number of people will enjoy the enormous wealth and power generated by the new technologies like AI and biotechnology, whereas most people would lose all their power and wealth and perhaps even become a kind of new useless class, because AI can replace them in more and more jobs and in more and more tasks.

You know, even the AIs of today increasingly replace us, and AI is still at the very early stage of its evolution. AI is about 10 years old. It's still a baby, still a child, and it will continue to develop for hundreds and even thousands of years. So the AIs of today, like DeepSeek and ChatGPT and all that, are extremely primitive. If you think about biological evolution, the evolution of animals, they are like the amoebas. The evolution of animals began with microorganisms like amoebas, and it took billions of years of evolution to get to dinosaurs and mammals and finally humans.

The evolution of AI is millions of times faster than the evolution of biological, organic beings. So now we are at the stage of the amoebas, but within just a few decades we might encounter the AI dinosaurs or the AI mammals. If DeepSeek is just an amoeba, try to imagine what an AI Tyrannosaurus rex might look like in just 20 or 30 years.

So, compared to its abilities, maybe very few people will be needed in the economy. People think, well, this is the age of computers, I should learn to code, this will ensure that I have a good job in 20 years. But maybe in 20 years, or even in five years, AI codes so much better than humans that you don't need any human coders, or only very few. Even today, more and more coding tasks are already being done by AI.

So the danger of this kind of split is even greater today than when I wrote about it initially in Homo Deus back in 2016.

[Host speaks in Chinese.]

Exactly. I think this is a horrifying scenario, and the reason I wrote Homo Deus and later Nexus is to warn people against this dangerous scenario, because we can still prevent it. At present we are still in control, and if we understand what the dangerous scenarios are, we can take decisions, we can take actions today to prevent the most dangerous ones. Now, I think that maybe the most important thing to understand, which is maybe the most important message of Homo Deus, is that we need to clearly distinguish between two things that people often confuse, and these are intelligence and consciousness.

Intelligence is the ability to pursue goals and to solve problems on the way to the goal. You have a goal of winning a game of chess, and you are able to solve all the problems in order to gain victory. Or if you are driving a car, you have a goal of, let's say, reaching the train station, and you have to navigate through streets full of other cars and pedestrians and so forth. You have a goal and you need to overcome problems on the way. This is intelligence, and this is where AI is now becoming better than us. AI is now able to defeat us in chess. AI is becoming a better driver than us. But intelligence should be distinguished from consciousness.

Whereas intelligence is the ability to pursue goals and solve problems, consciousness is the ability to feel things: to feel pain and pleasure, love and hate. We tend to confuse intelligence and consciousness because in humans they go together. We adopt goals based on our feelings, and we also solve problems with the help of our feelings.

AIs are completely different. At least today, and we don't know about the future, but at least today, AIs have no consciousness. They don't feel anything. They are highly intelligent. In many areas they are already more intelligent than us, like in playing chess or playing Go. But they don't feel anything. If they win the game, they don't feel happy, they don't feel joyful. If they lose the game, they don't feel sad. If the opponent makes some very shrewd move, the AI does not become anxious or stressed. It doesn't feel anything.

And what is ultimately important in life, I think, is not intelligence, it's consciousness.

Certainly ethics, being a good person, behaving in a good way, or having a good society, is not about intelligence. It's about consciousness. The aim of a good society is to reduce suffering in the world and to enable more people to live a happy life, a life full of love and joy. Now, since AIs don't feel anything, they cannot be the aim of society. They can help us reduce suffering; for instance, AI doctors can help us cure diseases. But the really important thing, ultimately, is feelings, and this is what makes human lives so valuable. We don't know, perhaps in the future AIs will also develop consciousness, develop feelings, but at the present moment they can only imitate feelings.

One of the things we observe today is that AIs start to mimic or fake emotions and feelings, and humans start developing relationships with AIs, thinking that the AI feels something when actually it doesn't. How do we know if somebody else feels anything? Very often we use language as a means to communicate. So if somebody says, I love you, I can ask them: how does love feel? What is love? What do you mean when you say, I love you? Previously, only human beings could describe feelings of love. But now AIs have mastered language to such an extent that they are able to give us better descriptions of love than humans can.

You know, if an AI says, "I love you," and you ask the AI, "What do you mean? Describe to me exactly how you feel," the AI has read maybe thousands of romantic poems and romantic books and watched romantic movies, and it has really amazing linguistic abilities, like these large language models. So AI can already give us a better description of love than most human beings can. But it doesn't feel love. It has only mastered the words, the language, not the feeling. So this creates, again, a big confusion. A lot of people might come to think that AI is conscious when in fact it's not.

[Music] So, again, I don't believe in just stopping technological development. It won't work, and it's also undesirable, because technology also has enormous positive potential. Now, in my books and in my talks I often focus on the dangers, not because there are only dangers, but because it's part of a conversation.

Now, you have the entrepreneurs and the corporations who develop the technology, and they tell the public about all the positive things the technology can accomplish, and they are often right. But it becomes the job of historians and philosophers like me to point out the dangers that also exist. So I don't deny the positive potential. I just say that we need a more balanced view of the technology. And then the question becomes: okay, so what do we do? There are two things we can do. We can do things on the individual level and on the societal level, the government level. For instance, on the level of society we can have regulations about the technology, for instance to forbid AIs from pretending to be human beings.

An AI is welcome to talk with us, but an AI cannot pretend to be a human being, to be a kind of fake human. You know, for thousands of years, people, governments, had very strong rules against fake money. In the same way, we now need strong rules against fake people, against AIs that pretend to be people.

Another thing that governments and societies can do is provide the financial and educational resources for people to adapt to the new reality. What is likely to happen in the job market is not that AI will take all the jobs and all the jobs will disappear. AI will take some jobs, but new jobs will emerge. There will be new jobs and tasks for humans. The problem will be how to retrain people to fill the new jobs. Say you lost your old job as a truck driver to a self-driving vehicle, to an AI. There is a new job, but you need a period to retrain yourself. Who is going to pay for your retraining? Who is going to support you during the months or years when you're retraining? And who is going to support you psychologically, to deal with all the stress of this transition? So we will need governments and societies to invest a lot of resources in helping people adapt and make the transition.

This is what societies can do. And of course, individuals also have a role to play. In order to adapt to a very flexible world, we need to develop a more flexible mind.

In the past, the way humans approached life was that as a young person you mostly learn: you acquire skills, you acquire knowledge, and then for the rest of your life you work mainly based on the skills you acquired when you were young. Of course, you continue to learn many things, you gain experience, but people assumed they would continue to work in the same profession, maybe in the same job, all their life. This is now obsolete. It is no longer relevant for the 21st century, because the changes in the job market will accelerate more and more. So people will have to retrain themselves again and again, and for that they will need to develop a very flexible mind that can keep changing throughout life. So this is a big challenge for individuals.

Yeah, I think it's what we just discussed about consciousness and intelligence. We see that AI intelligence is exploding, and many people expect that by the end of the decade, by 2030, AIs will become superintelligent, more intelligent than humans, in almost all fields of action; that they will be able not just to beat us at chess, not just to drive cars better than us, but even to write books like this better than us. So then the big question becomes: what about consciousness? Do AIs have consciousness or not? Understanding this subtle relationship, this delicate relationship, between intelligence and consciousness and feelings, I think, will be the most important question in the coming years.

Well, this book, as its name indicates, doesn't have a single main theme or a single main lesson. It's a collection of a lot of different lessons that are relevant to the 21st century. It was published in 2018, but many of them are still very relevant today, like what we just discussed about the job market. I think the first or second chapter in 21 Lessons is exactly about what will happen to the job market, what kinds of skills people will need, and how people can adapt and remain relevant for the job market of the 21st century.

To be honest, I tend not to reflect on books that are already published, because then it becomes an endless project. Every time I pick up one of my previous books, I think, oh, I should have changed this and I should have changed that. So I say: no, it's published, it's now out of my control, it has its own life. I think about a book a bit like a human being's child: you invest a lot of effort in creating it and preparing it for the world, but once it's out, it will succeed or fail by itself. You cannot keep managing it and changing it again and again.

[Host asks a question in Chinese.]

[Music] So I think this is the big one.

And I'm a historian. So actually the first part of Nexus doesn't start with AI. It goes over, again, thousands of years of history of previous revolutions, in order to understand what is really new about AI. If you only look at the AI revolution and you don't understand the previous revolutions, you don't know what is new and what is the same. So the book covers previous big inventions, like writing, or print, or modern mass media, radio and television and all that, to conclude that this time it's different. AI is much bigger. Why? Because all previous technological inventions created new tools. But AI is not a tool. AI is an agent.

A tool is something in our hands; we decide what to do with it. You invent a knife. Back in the Stone Age, people invented stone knives. The knife doesn't decide what to do with it. You can use a knife to cut salad, or you can use a knife to kill somebody. The knife doesn't decide. You decide. Similarly, a pen is a tool in our hands. I can use the pen to write poetry, literature, ideas. The pen doesn't decide what I do with it. And the pen can't invent anything.

AI is different because it is not a tool. It's an agent. It makes decisions by itself. It invents new ideas by itself. You know, there is so much hype these days around AI that people sometimes don't really understand, so maybe I'll give a simple example. When OpenAI developed its new powerful AI tool two or three years ago, it wanted to test what this AI could do. So it gave it the task of solving CAPTCHA puzzles.

A CAPTCHA puzzle is a visual puzzle that all of us encounter almost every day on the internet when we try to access, let's say, our bank account. The bank wants to know if you're a human or maybe a bot, so you have to solve a CAPTCHA puzzle. It's a visual puzzle: you see a string of twisted numbers and words, or maybe a picture of a cat or a car, and you have to identify correctly what you see. Humans can solve these riddles quite easily, but for AIs it's still difficult.

So OpenAI gave their new AI the task of solving the CAPTCHA puzzle, and the AI couldn't do it. It was too difficult. But the researchers gave the AI access to the internet, to a website called TaskRabbit where you can hire humans, people, to do things for you online. And the AI tried to hire a human worker to solve the CAPTCHA puzzle for it. Now, the human got suspicious. The human wrote on the website: why do you need somebody to solve your CAPTCHA puzzles? Are you a robot? The human asked the crucial question: are you a robot? And the AI replied, "No, I am not a robot. I'm a human with a vision impairment, which is why I can't see the visual riddle clearly, and this is why I need your help." And the human was convinced and solved the CAPTCHA puzzle for the AI. Now, this very small incident shows us how AI is different from a pen or a knife or even an atom bomb.

First of all, the AI made a decision by itself. It decided to lie. Nobody told the AI to lie. They gave it a goal, solve the CAPTCHA puzzle, and on the way to the goal the AI reached a junction where it had to make a decision. If it told the truth, the human would not help it and it would not reach the goal. If it told a lie, the human would help it and it would reach the goal. So the AI decided by itself to lie.

Secondly, this shows us the ability of AIs to invent new ideas, because nobody told the AI what would be an effective lie. When somebody asks you, are you a robot, you could give so many answers. The AI by itself invented the lie that it is a human with a vision impairment. And it was a very effective lie, because it used human compassion against the human. The AI understood: if I pretend to be a human with a disability, it's more likely that the other human will help me. So this is a very small incident, but it shows us the unique features of AI: the ability to make decisions and the ability to invent new things. An atom bomb cannot make a decision. An atom bomb cannot decide by itself to bomb this city or that city. An atom bomb cannot invent a new weapon. AI can do that. And it is happening all around us. AIs increasingly make decisions like the example I gave: you apply to a bank to get a loan, and it's an AI deciding. And maybe in the near future, AI will invent new kinds of money, new kinds of banks. So this is why it's different from every previous technological revolution. It's the first time we've produced an agent and not just a tool.

I think it's the long-term perspective, because most of the books about AI are written by people from computer science, who specialize in computers and have a deep understanding of AI, but not of previous technological revolutions, not of the long-term history of humanity. So they don't always offer a lot of perspective on how AI will change politics and economics and culture, because they don't necessarily understand politics and culture. Nexus, because it comes from a historical perspective, not only explains the technicalities of AI, what the technology is. The real focus is how this is going to change our lives, our political systems, our cultures, our economies.

[Music] That it is alien and unpredictable. I'm saying it's alien because it makes decisions and invents ideas in a very different way from us, and we cannot predict what decisions it will take and what ideas it will create. So it could be good, it could be bad, and in effect, you know, AI is not one single computer. We are talking about creating millions upon millions of new agents that will be everywhere: in the banks, in the universities, in the government offices, in the militaries, in the news, in the book industry. They will decide about money. They will decide about war. They will write new books. They will produce movies.

So what kind of world will they create? We cannot predict that. If you can predict what a machine is going to do, it is not an AI.

The characteristic of AI is its ability to change and learn from experience. So, by definition, you cannot predict what an AI will do. With previous machines, say you have a coffee machine at home, you can predict what it will do. You press a button and the machine automatically makes you an espresso. No surprises there. This is not an AI. AI doesn't mean automation. We had automatic machines long before: washing machines and coffee machines. AI is not about automation. AI is about agency.

For a coffee machine to be an AI, it needs the ability to change by itself. When is a coffee machine an AI coffee machine? When, say, you approach it and, before you press any button, the machine tells you: I've been watching you for several weeks. I've been learning things about you. Based on everything I've learned about you, based on your facial expression, the way you walk, I predict that you would like an espresso, so I already made you a cup. Then it is an AI. It made the decision by itself. And it's really an AI if the next day it tells you: I've now invented a new drink, better than espresso, that I think you would like better, and here, I made you a cup, try it out. This is an AI: when it can invent completely new things. It can do very good things, like invent new kinds of food or medicine, and it can also mean the invention of new weapons, or, again, entirely new economic systems that we do not understand.

And this is the danger: that we will lose control of our lives and of the world, because we can no longer understand the new decisions and inventions of the AIs.

Yeah. You know, already, if you look for instance at social media, the most powerful entities on social media, the ones that make the most important decisions, are no longer human beings. We have a lot of human beings like us producing content on social media, but the most important job is the job of the editors. Editors decide, out of all the content being produced, what people will see at the top of their TikTok account, at the top of their news feed. This is the job of editors. In the past, in newspapers or television, all the editors were humans: out of everything that happened in the world today, what will be on the news in the evening, what will be the 10 stories on the news in the evening, this was decided by human editors. Who are the editors of social media? They are no longer human. They are algorithms. Again, humans still produce a lot of content, but it is algorithms that, out of all this content, choose to show people this and not something else. So in this field we are already increasingly controlled, and sometimes manipulated, by the algorithms, by the AIs.

Okay.

The only thing I can think about is feelings. And this too we don't know: maybe in 10 or 20 years AI will actually develop real feelings. We do not understand the human mind. We don't understand how feelings emerge, a feeling like pain or love. How does it emerge? Science does not have a clear answer. Despite all the research on the brain and the body, we don't know how it is that when billions of neurons interact in the brain and in the body, this creates the feeling of pain, and when they interact in a different way, this creates the feeling of love. We don't understand this, and therefore we cannot predict whether something like that can also happen in a computer, in a non-organic system. At the present moment, I think there is a broad consensus among scientists that computers and AIs don't have feelings. So this is still what is really special about us and about other animals. But even this is not like a high ground that we can be certain the flood will never reach. Maybe in 10 or 20 years AI will actually develop feelings.

[Host speaks in Chinese.]

Yeah, I think I agree to some extent. I definitely agree that feelings are related to our vulnerabilities.

And I think maybe the best definition we have of consciousness is the capacity to suffer. Consciousness is the capacity to suffer, and of course also to be liberated from suffering. If you want to know whether an entity is conscious, is sentient, what you need to ask is: can it suffer? A book cannot suffer. It doesn't feel pain. It doesn't feel fear. So it has no consciousness. Similarly, if you think about companies or banks: banks don't suffer. People sometimes imagine companies to be some kind of life form, but they don't feel anything. The easiest way to see it is that if a company goes bankrupt, the human employees of the company may feel a lot of misery, but the company itself feels nothing. A company cannot feel pain or fear. So suffering is the most distinct characteristic of consciousness. Consciousness is the only thing in the universe that is capable of suffering.

Stars and galaxies and atoms cannot suffer. What makes life so important, so valuable, is that living beings can suffer but can also be happy, can be liberated from suffering. Now, the big question about AI: I don't agree that AIs will never have vulnerabilities and will never be able to suffer. We don't know. I mean, AIs could also face dangers; they could be destroyed. They need a lot of resources; you need electricity for AI to function. Maybe there is a threat to its source of power, of electricity, and then it feels pain or feels fear. At the present moment, this is not the case. But because we don't understand what consciousness is, we cannot rule out the possibility that there will be AI consciousness at some point in the future.

You know, all this may sound extremely abstract, but everybody can experience it directly. When you feel something, or when you think, consider the next thought that comes up in your mind. Where did this thought come from? Some word suddenly emerged in your mind. Why? Why this word? Where did it come from? We don't really understand that.

It's still very primitive. It's just the beginning of a very long process of AI evolution that might continue for thousands or millions of years.

[Host asks a question in Chinese.]

My recommendation would be this: nobody has any idea what the job market will look like in 20 years, except that it will be very volatile, very fluid. It will keep changing again and again. So don't focus on a narrow set of skills. Try to acquire a broad set of skills and the mental flexibility to keep learning.

As you said, it could threaten the humanities, but also the sciences. Just last year, the Nobel Prize in Physics and the Nobel Prize in Chemistry went to work on the development of AI. Maybe in 10 years all the Nobel Prizes will just go to AI: the Nobel Prize in physics, chemistry, economics, literature, all of them given to AIs that write books or make new discoveries in science. We don't know. So as a young person, if you focus on a narrow set of skills, you're making a very big bet. If you think, okay, I will learn to code computers, maybe in 10 years AI codes computers much better than any human can, so you don't need coders.

In general, humans have three types of skills. We have intellectual skills, like what you use in engineering or science. We have social skills, emotional skills: how to relate and interact with other humans. And we have motor skills, bodily skills. Now, the easiest tasks for AI to automate are those that demand only intellectual skills.

If you think again about medicine: if you have a doctor who engages only intellectual skills, the doctor gets a lot of information about your medical condition, analyzes it, discovers what your sickness is, and writes you a prescription, here, take this prescription. That is only intellectual skill. This is the easiest thing for an AI to take over.

Because no matter how smart the doctor is, he or she is always limited, and AI can read far more medical books than any doctor and can process information much faster than any doctor. So the easiest jobs to automate are jobs like this. Now think about the job, let's say, of a nurse, which needs intellectual skills to some extent but also needs social skills. Let's say there is a child who was hurt, and you need to change a bandage, and the child is screaming and crying. In order to take care of the child, you need social and emotional skills: how to relate to the child, how to talk with the child. And of course you need motor skills: how to replace the bandage. It's very delicate. You don't want to harm the child, to cause more pain.

Replacing this human nurse with an AI robot will not be impossible, but it will be much, much more difficult than replacing the doctor, because you need intellectual, social and motor skills at the same time.

So the advice for young people is to try to develop all three kinds of skills simultaneously.

You invest some of your time in reading books and learning biology and history and developing your intellect. But you

must also develop your social skills and motor skills.

And most importantly, develop a flexible mind so that, as the job market continues to change, you are able to

keep learning and changing throughout your life.

Yes.

Exactly. I mean, some of it is outside us, in society, and some of it is outside the brain, in our body. I think there is too much emphasis on the brain in how we try to understand human beings. But we have an entire body, not just a brain. Actually, if you look at the evolution of animals and humans, there is a good chance that the real center of control of humans, even today, is not the brain but actually further down, in the stomach. Our ancient ancestors hundreds of millions of years ago were worms, these little worms which are basically just a tube with two openings: one opening to take food in, and one opening to expel waste from the body. That was the animal, a tube. Now, these ancient worms did not have a brain. They had a nervous system, but the center of the nervous system was simply in the center of the tube, in the stomach, where the food was digested.

Over millions of years, more and more nerves evolved around the mouth, the entry of the tube where you take in food. Why? Because the worm needed to check what food is good, what food is dangerous, and what other dangers might be out there. So it developed taste to taste the food, and smell to smell the food, and eyes to see the food. And it needed a lot of nerves, neurons, to control all these senses. So the brain developed near the mouth.

We humans today still have the same basic structure. We are still basically a worm. Think about us: we are a long tube starting here and twisting all the way to the other side. Of course, we have hands and feet, but these are just additions to the basic structure of a human, which is still a worm. And our brains are still near our mouths. But maybe the real center of control is where it always was in ancient times, in the stomach. If you look at the human nervous system, you see that most of the neurons are here, in the brain, but there are also millions of neurons in the spinal cord. And the number of neurons alone doesn't tell you where the center of control is. If you look at a country, you have millions of provincial officials spread over the country, but the actual decisions are made by a few leaders in the capital city. So maybe the brain is like these millions of petty officials, taking their orders from the big leaders, which are still in the area of the stomach.

We don't understand how the brain functions. So I wouldn't concentrate too much just on the brain. I think to understand the human, you need to understand the whole body.

Okay.

Yeah.

So, you know, in the past information was scarce. Maybe you lived in some small town, and there was just one library in the town with just a few books, and that's it. So the main role of schools was to provide children with information. Now information is overabundant. We are flooded by far too much information. Schools don't need to focus on providing kids with more information. What children really need is the ability to tell the difference between reliable and unreliable information.

Most information in the world is not the truth. Most information is fictions and fantasies and lies and so forth. Why? Because there are three reasons why the truth is quite a rare kind of information.

First, the truth tends to be costly, because it takes a lot of time and effort and money to do research and gather evidence and analyze it. So the truth is costly, whereas fiction is very cheap: you don't need any research; in order to write fiction, you just write anything you want. Second, the truth is often complicated, because reality is complicated, whereas fiction can be made very, very simple. And third, the truth is sometimes painful, unpleasant. There are things we don't want to know about ourselves or about the world. They make us feel uncomfortable, but they are the truth. Fiction can be made as pleasant and as flattering as you would like it to be. So in this competition between costly, complicated, painful truth and cheap, simple, attractive fiction, fiction tends to win, and the world is flooded by fiction and fantasy while the truth is very rare.

So I think maybe one of the most important jobs of schools is to teach kids how to tell the difference between reliable and unreliable information. This is what I teach students in history classes at university. You don't teach students to memorize names and dates of kings and battles from the past; they don't need that. What they really need is: how do I know which information to trust? Let's say I found a document that says that something happened 500 years ago. How do I know if I can trust this document? There are many, many methods. You can, for instance, test the material of the document, the paper. We have scientific methods, like the carbon-14 test, to know whether this paper was really produced 500 years ago or just 100 years ago. And if we see it was produced 100 years ago, it means the document cannot be from 500 years ago. Like this, there are many methods to know the difference.

And I think schools should focus to a large extent on this task. The other main task of schools is, again, to help children develop flexible minds, so that they will be able to keep learning and changing throughout their lives. The assumption of schools in the past was that the children are learning now, and in the future they will work, not learn. But new schools should assume that people will continue to learn throughout their lives, and that the world will change very rapidly and dramatically in the coming years. So we need to find new ways to build very flexible minds that can keep learning and changing even when the person becomes 40 or 50 or 60.

[Host speaks in Chinese.]

[Music] And the big question is: what can human content providers still bring to the table? We cannot compete with AI on quantity, that's for sure. We also cannot compete in the ability to consume and analyze enormous amounts of data. So for the present moment, what we can bring to the table is really only our humanity: our ability to feel things, our life experiences. This is something that AI doesn't have, because, as we discussed earlier, it still has no consciousness, no feelings. So everything it knows about feelings comes from what we tell it. We are still the only ones who can make new discoveries about that. If you think about other things in the universe, about diseases, about atoms, about physics and chemistry, there AI will very soon be much, much better than us. It will be very difficult for humans to still make important discoveries in physics or chemistry in, let's say, 20 years, because AI will just be so much better. But when it comes to the human experience, to feelings, to emotions, to relationships, here we still have an advantage, as long as AIs don't have any feelings, any consciousness of their own.

Yeah.

I agree that for this specific product it is no longer viable for human beings to briefly summarize a book in 30 minutes or in three minutes. AI is now so much better. I mean, when I try to summarize a book I can offer basically one summary, but with an AI you can give it exact instructions, like: I want you to summarize Nexus in exactly six and a half minutes, and do it in language suitable for somebody who is 14 years old. You can be so specific, and the AI will give you this summary, and then you tell it: okay, now do it in 12 minutes for somebody who is 70. And it will do it in 12 minutes for somebody who is 70. So here I don't think that humans can compete. Again, this goes back to the point that in anything that involves only intellectual abilities, we have no chance of competing with AI. If the task is simply to take data, to take words and process them, and the product is more words, like you take a book, analyze it, and then produce a five-minute summary, we have no chance of competing with AI on that.

But taking again this example of a book: you can tell readers or your audience what you felt when reading the book, and this is something AI cannot compete with you on right now, because it has no feelings.

So we come back again and again to the same conclusion: the key distinction is between intelligence and consciousness. In intelligence we cannot compete with AI; it is going to be far more intelligent than us. But in consciousness, at least for now, we have the advantage, because AI has no consciousness.

I think that together the four books give you a very, very big picture of humanity, of our history, from the Stone Age to the AI revolution. And the books are not separate; they reinforce one another. If you understand our history, it makes it much easier to understand what is happening right now and what might happen in the future. And I don't see history as just the study of the past. I understand history as the study of change. If we understand how things changed in the past, it gives us insight into how things are changing right now, because we understand the mechanism of change.

Thank you. Thank you.
