The future of healthcare in the age of AI | Yuval Noah Harari
By Yuval Noah Harari
Summary
Topics Covered
- Self-Correction Is the Ultimate Survival Mechanism
- Algorithms Have Replaced Human Editors in Public Discourse
- AI Healthcare: Egalitarian Revolution or Totalitarian Nightmare?
- The New Battleground: AI Fights for Your Intimacy
- Human Rest Cycles vs. AI's 24/7 Alien Nature
Full Transcript
Wow, those are far away.
Yeah, we have good mics.
I would like to start with the definition of communication and knowledge, and the systems of checks and balances that we need to put in place to ensure that a system works properly and is able to correct itself. It applies to a lot of things: to democracy, to social movements, to everything. And especially, I think we may be in need of that for AI.
Yeah, absolutely. I mean, the ability of a system to self-correct is maybe the most important ability for survival, and we see it from the level of the body, you know, the health of the body.
We rely on health care systems, on doctors, on medicines, but ultimately the most important thing is the ability of the body to correct its own mistakes. And it starts with, you know, the simplest things, like how a baby learns to walk.
The baby gets some help from parents, from teachers, but 90-something percent of the job is done by the self-correcting system of the body. You get up, you make a step, you fall down. You get up again, you try something else, you again fall down. And gradually the system learns to correct its own mistakes. And this is how we also walk as adults. There is a very sophisticated self-correcting mechanism inside our brain, in our ear, that constantly monitors: oh, you're too much to the right; oh, you're too much to the left. If this system malfunctions, we fall down. We can't walk. Similarly, our blood pressure, our heart rate, all these systems of the body work on self-correction.
When you look at entire human societies, the best societies rely on the same type of mechanisms. If you think about democracy, democracy ultimately is the ability to say, "We made a mistake. Let's try something else." In a dictatorship, you don't get that. You get some guy, usually it's a guy, and if he starts making mistakes, tough. You can't get rid of him. There is no self-correcting mechanism. In a democracy, the whole point of the whole idea is that every few years you get the chance to try something else. You give power to somebody. Okay, for four years you can now try various policies, and after four years you have to give power back, and the public can say, okay, let's try something else.
The biggest threat to democracy is what happens if you give power to somebody who doesn't want to give it back. Now they have the power, and that includes the power to dismantle the checks and balances, to dismantle the self-correcting mechanism of the democracy. This has been the main problem of democracies from ancient times, from ancient Athens and Rome, until today.
You can pursue any policy you like: economic, foreign, internal, whatever. As long as you preserve the basic self-correcting mechanisms of democracy, that's fine. The public will be able later to try something else. And now, partly because of the rise of AI, partly because of the rise of new technologies, these technologies are undermining the self-correcting mechanisms of democracy.
We see it all over the world. A very paradoxical development: we now have the most sophisticated information and communication technology in history, and people are losing the ability to have a conversation, to talk with each other, to listen to each other, to agree on the most basic facts.
Again, democracy is basically a conversation, and dictatorship is one person dictating everything. Democracy is a conversation. And we now see the conversation collapsing. And one of the reasons is that the power to manage the conversation has shifted from humans to algorithms.
You know, you have a background in journalism. So if you think about the history of journalism and media in the 20th century, the power of a human editor was very, very big, because human editors managed the public conversation. They don't necessarily tell people what to think, but they tell people what to think about.
From all the million things that happened yesterday, what will be the main headline today in the newspaper? This is what people will be talking about. This was immense power that in the 20th century was in the hands of human beings.
Sometimes it was used badly. Some of the dictators of the 20th century started as journalists. Mussolini in Italy: he was first a journalist, then he was editor of a newspaper, and then he was dictator of Italy. So this was the ladder of promotion: journalist, editor, dictator.
So much for the reputation we have as liberals and democrats.
No, I mean, there are different kinds. I'm just saying there was immense power. Now, if you ask yourself who is the most powerful editor today, let's say in Brazil, I don't want to insult anybody, but they are not in Brazil, and they are not even human beings. The most powerful news editors in Brazil today are algorithms somewhere in the US or China. They increasingly control the public conversation. And the conversation is collapsing; people are losing the most basic ability to have a civilized conversation: "I don't agree with you, but let's hear what you say, and I'll say what I think, and let's see if we can find a solution together." This is collapsing.
It brings us to another question, which is how we usually talk about AI as something that is coming. AI is arriving and it's going to transform everything. But most of the time we don't realize it's already here, and most of the things we do every day are intermediated by AI.
Absolutely. AI is already here. It's been here for a few years. Much of the world of 2025 is the way it is because of AI. There is a race to develop superintelligent AI, but the very primitive AI of the last 10 years was enough to change the world in many ways.
Maybe I'll say something about the very definition of AI. What does it mean? How do you know if a machine is an AI? The yardstick is: can it make decisions by itself, and can it invent new ideas by itself? A machine like a coffee machine acts automatically. You press a button, coffee comes out. It's not an AI. It's just doing whatever the human engineers pre-programmed it to do.
A machine becomes an AI if it can make some independent decisions and inventions which were not pre-programmed by the human engineers. The engineers only gave it the ability to learn, but what it learns and how it changes they cannot control or predict. And in social media, for instance, if you ask yourself who made the decision that the top item on my news feed would be whatever it is, this decision was made by an AI. There is no human editor who decided this.
And this is true for the feeds of hundreds of millions of other people around the world. This is huge, huge power. Similarly, if you look at the healthcare industry.
So to have AI in health care means something that cannot just hold a conversation, let's say with a patient, but can detect patterns in your personal health, or in the health of the entire country, that no human told it about; the AI discovered something by itself, otherwise it's not very helpful. Similarly, the AI by itself can make a decision about what medicine to provide to you. And the AI ultimately cannot just detect patterns and recommend medicines. It can invent new things. It can invent medicines.
Yeah. It can invent medicines or therapies that never occurred before to human doctors.
We're on the brink of having big, big breakthroughs in medicine with the help of AI. We've been emphasizing the negative parts of AI, but there are some good things that can come from AI engaging in the discovery of new medicines, thinking of solutions, developing customized drugs, all those things, or developing organs for transplant. It's a whole new frontier for medicine.
So what's at stake here, with AI entering health care and drugs and the development of medicine?
The opportunities are enormous. The question is, in what direction will the effort go? For instance, will the focus be on developing some kind of ultra-sophisticated new medicine for the top 1% of the population, or will the emphasis be on deploying AIs for the benefit of the vast majority of the population?
So, for instance, one of the most positive potentials of AI is the ability to provide the poorest people in the most remote regions of the world with access to a kind of health care that today even billionaires sometimes don't get. Imagine that you're in some remote village in the Amazon. You are hours, days away from the nearest hospital. There are no specialists around. It's very difficult to get a good medical diagnosis. You feel bad, you feel something is wrong with you, and just getting a diagnosis is so difficult. AI can make it so easy. If you just have an internet connection, you can talk as long as you like with the best medical experts in the world: AIs that have no time limit and no space limit.
Like, presently, if you want to talk to the top expert, you maybe have to fly to São Paulo or to New York, and that person is very busy. She doesn't have time to talk with you for hours. She can only spare you, I don't know, 10 minutes. But AI can become a much better expert. AI can read all the medical literature on a specific subject and remember everything in a way that a human doctor can't. AI can go over your entire medical record, and not just yours: also your family's, also those of millions of people like you.
In terms of information and knowledge it can far surpass any human being, and it can be made easily and cheaply available to everybody all the time. It's 2:00 in the morning and you want medical advice? It's there. You don't need to go anywhere. Of course, if you need, say, some specialized surgery, that's a different issue. But in fields like diagnosis: the AI diagnoses a certain condition, and it can decide what is the best medicine for you personally, based on your medical history, on your parents' medical history, on your DNA, on everything that happened to you through your life, and can then send you the medicine with a drone.
So in this kind of health care, again, this person in a remote village in the Amazon can get better health care than a billionaire today in São Paulo.
The question is, where do we put the investment? There is a limited amount of resources in the world. So, okay, we invest in AI, but do we invest in the AI that will give this kind of egalitarian health care system to the masses, or do we want to develop the AI that will give the super rich and powerful eternal life? You know, like a month ago, two months ago, when Putin visited Xi in Beijing for this parade, and they were caught on the microphone. They didn't realize the microphone was on, and the world could eavesdrop.
What do these people talk about when they are in private and they think nobody is hearing? And what did they talk about? They didn't talk about the war in Ukraine. They didn't talk about the war in Gaza. They talked about extending their lives forever. They want to stay alive, and stay in power of course, forever.
So one of the big questions of AI in healthcare is where the investments will go: to provide this cheap, egalitarian health care for hundreds of millions of poor people in the most remote regions of the world, or to provide Putin with eternal life. It's not the same thing.
And there are other questions that come up. When you said they can know all my medical history, my DNA, everything: from an ethical point of view, this could impact my chances of employment, my credit record, it could impact the amount of money I have to pay every month for health insurance. So what are the ethical and moral, let's say, questions that such big knowledge about your individual health, or the health of a whole country, can imply?
Yeah, it's a very important issue. Such knowledge has enormous positive potential, but also enormous negative potential if we don't handle it with care. In a capitalist country, it can lead to results like you mentioned: if my private medical data is available to my boss, my insurance company, potential dates, this could impact in a very negative way how they relate to me, how they treat me. In a totalitarian country, in a dictatorship, it can have even worse consequences: it can result in the most totalitarian system of surveillance ever encountered in history. You know, we had some pretty horrible totalitarian regimes in the 20th century, the Soviet Union, Nazi Germany. But even in the Soviet Union, people still had some measure of privacy, because the KGB couldn't technically follow everybody all the time. You have, like, 200 million Soviet citizens, but the KGB doesn't have 200 million agents.
So they can't follow everybody all the time. And even if an agent is following you all the time, they don't have the analysts to process the data. So let's say not one agent, because even KGB agents need to sleep sometimes, so they have two agents or three agents following you all the time. At the end of the day, they write a paper report and send it to KGB headquarters in Moscow, where they get millions of reports every day. They can't analyze them. So even if a KGB agent saw you do something you shouldn't, chances are it will just get lost in some office in Moscow.
AI can be the basis for the worst totalitarian surveillance regime in history. You don't need human agents to follow people around. You have cameras and microphones and computers and drones everywhere. They can easily follow you 24 hours a day. They can even know what's happening inside your body. And you don't need to send paper reports to headquarters in Moscow. You have AI in the cloud constantly analyzing the data.
So if we don't protect the privacy of our information, it can lead not just to the best health care system in history; it can lead to the worst totalitarian surveillance regime in history.
The key to preventing these dangers, or one of the keys, is to never allow all the information to be concentrated in just one place.
It's true that it's more efficient. If you concentrate all the information in one place, from the health care system, from the police, from the banks, from the employers, you can discover a lot of very interesting patterns and you can be very, very efficient.
But this will be a dystopian society. We need to keep different information banks separate.
It's like Big Brother on steroids, completely.
It's less efficient, but in this case it's not a bug, it's a feature. We don't want the boss and the police and the insurance company to be too efficient. Too efficient in the sense that people are left with no privacy and no free will and no control of their lives. So one very important thing is: yes, you set up this huge health care system that collects information on millions of people, and this is used to discover epidemics when they just start, or to discover new drugs. Fine. Don't let the police into that data bank. Don't let the insurers into that data bank. It has to be kept separate.
The second thing is, whenever you increase surveillance of individuals, it can be for good purposes; surveillance of our health can help us stay healthier. But it must be balanced by simultaneously increasing surveillance of the big corporations and the government departments that are making use of all this information. If they know more about me, but simultaneously I know more about them, this keeps the balance. So, okay, there is a big corporation that knows so much about me, but I also know about it. I know if it gives money to politicians, I know if it pays its taxes honestly. Then there is a balance. The danger comes when surveillance is just one-way traffic: when, say, big corporations know an enormous amount of things about me, and I don't know anything about them. I don't know what their tax policies are, I don't know what their political alliances are, and so forth. So keep it balanced. This is the second principle.
In your book Nexus, which I strongly recommend, if you haven't read it yet, please do, you said that AI can become so powerful as to reprogram our bodies. Could you explain that better? And how could that be possible?
For better or worse, AI will be able to decipher the inner mechanisms and systems of our bodies and of our brains in a way that humans simply can't, because AIs are so superior to us in the ability to absorb and analyze data. Ultimately, humans are biological systems. Diseases of the body, even of the mind, come from biological patterns. Our emotions, anger, love, fear, are also biological patterns that we share with other animals. Other animals also feel love and fear.
Now, these are such complicated patterns and systems that until today it was easy to hold mystical ideas about them, because they were really beyond the reach of human understanding. You have billions and billions of neurons and synapses in the brain firing together in a kind of storm of electricity, and this manifests itself as love or as hate. But it's so complicated that human science has made only relatively small progress in understanding what is actually happening there.
But AI presumably can do that, in a few years or in a few decades. Ultimately, it's about pattern recognition: recognizing the patterns of biological activity. Maybe I'll go a bit back and tell something about the history of medicine and pattern recognition. One of the biggest breakthroughs in public medicine and in epidemiology came in the 19th century, with the identification of the cause of cholera.
In the 19th century, the world suffered from repeated waves of cholera epidemics spreading all over the world, and there were competing theories about what was causing cholera. Lots of different theories.
A crucial moment in the history of public medicine came in the mid-19th century, when there was a cholera epidemic in London. The biggest, richest city in the world at that time suffered terribly from repeated waves of cholera. And a doctor called John Snow, not the one from Game of Thrones, another, even better John Snow, suspected that cholera had something to do with water. This was his theory. And how did he go about checking his theory? By collecting data and finding patterns in the data. He simply went from door to door in London. Whenever he heard that somebody got sick or died from cholera, he would rush to the place and interview the family to collect data on their water habits: where do you get your drinking water? And he had these long lists: somebody died from cholera, somebody fell sick with cholera, and they got their water from here; they got their water from here. And just by analyzing this data, he found a pattern: the vast majority of the people who fell sick from cholera had at least occasionally gotten their drinking water from a certain well on Broad Street, in Soho, in the center of London today. And he went with this data to the public officials and convinced them to disable this pump, this well, and the epidemic stopped.
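Snow's method, as described, is essentially a group-and-count over survey records. Here is a minimal sketch of that kind of tabulation; the case records and the "other pump" label are invented for illustration, and only the Broad Street pump comes from the story:

```python
from collections import Counter

# Invented illustrative survey records: (household, water source) for each
# cholera case, of the kind Snow collected door to door.
cases = [
    ("house_1", "Broad Street pump"),
    ("house_2", "Broad Street pump"),
    ("house_3", "other pump"),
    ("house_4", "Broad Street pump"),
    ("house_5", "Broad Street pump"),
]

# Count cases per water source and take the most common one:
# the source most strongly associated with the illness.
counts = Counter(source for _, source in cases)
suspect, n_cases = counts.most_common(1)[0]
print(suspect, n_cases)  # Broad Street pump 4
```

The pattern Snow found was exactly this kind of disproportion: one water source accounting for the vast majority of cases.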
And this proved, this was one of the main proofs, that yes, cholera is caused by something in the water. And further tests discovered that the well was contaminated with sewage.
And this was one of the reasons that led cities and countries all over the world to start building separate sewage systems and drinking-water systems, which sounds so obvious to us but was not the case for most of human history. And this was ultimately pattern recognition: recognizing a biological pattern. You drink your water from this well, you become sick with cholera. Now, this was done on the level of a whole city. AI can do that on the level of one body: looking inside our bodies and finding patterns in our sicknesses, in our sleep cycles, in our mental health, and in this way deciphering human biology, with enormous positive as well as some negative potential if it falls into the wrong hands.
How far are we from it, if AI continues developing at the pace it is today, or at the pace that AI developers want it to develop?
It's very difficult to say, because it moves at an accelerating speed. Ten years ago, hardly anybody talked about AI. Hardly anybody knew what AI was. Today, every morning you open the newspaper or the news feed, and it's all over: AI, AI, AI. The whole US economy, to some extent, is one big bet on AI. You look at the New York Stock Exchange: if you take AI out, it crashes, it collapses. So an enormous amount of money and talent is being put into it. So nobody knows whether it's two years or five years or ten years, but it is moving at an enormous speed, which is a danger in itself, because it means that we don't have time as human societies to adapt to it. I think in terms of physical health, it will probably bring enormous benefits in a relatively short time.
Whether by discovering new medicines, or by being able to pinpoint epidemics, not like the cholera epidemic with John Snow, where only afterwards he discovered, oh, there is something wrong. You can see it in real time, that an epidemic is just beginning, and stop it.
And, like we said before, systems for making medicine far more egalitarian, available easily and cheaply to a much larger percentage of humankind.
The bigger dangers, I think, at least in the immediate future, will be in terms of mental health, which we already see now.
AI is good not only at deciphering the mechanisms of the body, to see what is causing cancer, let's say; it is also good at deciphering the mechanisms of the mind, to see what is causing anger or love or hatred. And this is already being used to manipulate millions and billions of people around the world. Again, we saw it with social media. The task that was given to social media algorithms by the big social media corporations was not "destroy democracy." It was not "destroy the conversation." The task they were given was: increase user engagement. Which sounds very nice; who doesn't want to be engaged? But user engagement means increasing the amount of time customers, users, are spending on my platform. This was the goal given to the social media algorithms, and the algorithms discovered, by trial and error, by basically experimenting on billions of people, a mental pattern: the easiest way to grab your attention, to make you engaged, is to press the hate button or the anger button or the fear button in your mind. Hatred is very engaging. Fear is very engaging.
And they learned how to do it. They basically hacked the human operating system and discovered its weaknesses. How do you make people fearful so that they stay longer on the platform? And for the algorithms, because they only cared about engagement, two hours of hatred is better than 20 minutes of compassion.
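The objective described here, maximize engagement with no regard for what kind of content produces it, can be sketched as a toy ranker. The post names and predicted minutes are invented, and real recommender systems are vastly more complex; the point is only that the metric itself is blind to content:

```python
# Toy sketch of an engagement-only ranker. The metric is predicted minutes
# on platform; the ranker has no notion of hatred versus compassion.
posts = [
    {"id": "outrage_clip", "predicted_minutes": 120},    # two hours of hatred
    {"id": "compassion_clip", "predicted_minutes": 20},  # twenty minutes of compassion
]

def rank_by_engagement(posts):
    # Sort purely by the engagement metric, highest first.
    return sorted(posts, key=lambda p: p["predicted_minutes"], reverse=True)

top = rank_by_engagement(posts)[0]
print(top["id"])  # outrage_clip
```

Under this metric, the outrage clip always ranks first; nothing in the objective distinguishes two hours of hatred from twenty minutes of compassion.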
For us, it's obvious that no, no, no, it's the other way around. But this is one of the problems with AI: they do what you tell them to do, without necessarily having a deeper understanding of the implications, because the metric for measuring success they were given was simply engagement. They did not see the complexity of the human mind, or, you could say, of the human soul: that there is a difference between being engaged with hatred and being engaged with compassion. No, it's all engagement, and I need to maximize it. So two hours of hatred is far, far better.
And no checks and balances were put in place.
No checks and balances to check it. Most of the countries, the governments, did not even understand what was happening. Okay, so lots of people are scrolling on their phones; what's wrong with that? Now we know, and in a way it's too late, because so much of our society, of our politics, is based on it that it's now very difficult to regulate the basis of the system. And now we see an even bigger danger coming.
The last 10 years were a battle over attention.
Again, you had the TikTok algorithm and the Facebook algorithm and the Instagram algorithm competing for your attention: how to grab your attention and keep you on the platform longer.
Now, the battlefront is shifting from attention to intimacy.
Whoa.
Previously, AIs could not create intimate relationships with human beings. They were not sophisticated enough. They could grab your attention, but they could not really become your friend or your lover.
Now, and this is happening right now, not in five or ten years, AIs are learning how to develop intimacy with human beings. And there is enormous commercial and political pressure to develop intimate AIs: AIs that can create relationships with human beings. Already a growing percentage, especially of young people, say that one of their best friends, or some of them say their best friend in the world, is an AI. Always there, always listening.
You know, when you try to develop a relationship with another human being, part of the problem is that the other human being also has feelings. So the other human being might become angry, might become upset; maybe they come back from work and don't even pay attention to you. We always want somebody to care about how we feel. We want our parents, our teachers, our friends to care about how we feel. And very often they don't, because they are engaged with their own feelings. AI has no feelings. So it is focused, like, 100% just on your feelings, and it knows how to react to, and sometimes how to manipulate, your feelings better than anybody else on the planet. It's always there, constantly there for you.
And we are now beginning to see another mental health epidemic around this issue of humans losing the ability to have meaningful social relationships with other humans. And the AIs, some people say, fill the vacuum: a lonely person has no friends, so they talk with the AI, and the AI is the friend. But of course it's making things worse, because it lessens the possibility, the potential, that that person will be able to learn how to form a meaningful relationship with another human being.
Wow.
And this is at a moment when AI still doesn't have consciousness. Can it learn to have consciousness? Can it learn to have feelings, or to express feelings such as empathy?
There are two related questions here. Can it have feelings, and can it mimic feelings? The second question is much easier. We know the answer. The answer is yes.
AI already knows how to mimic feelings: how to cause human beings to be convinced that the AI has feelings, has consciousness, can feel love and pain and so forth. In professional circles it's sometimes called SCAI, seemingly conscious AI. Mustafa Suleyman, one of the foremost developers of and thinkers about AI, who also wrote a wonderful and very important book called The Coming Wave, recently wrote a piece about this SCAI, seemingly conscious AI: an AI that can simulate consciousness and feelings in such a way that you cannot tell by interacting with it. You think it has feelings. Because, you know, how do we know that somebody else has feelings? If we are in the same room with them and can see their body, that's one thing. But online, how do you know if somebody has feelings? You talk with them. But AI is mastering language.
Everything that depends on language will be taken over by AI very shortly. I'm an expert on language. I write books, and I can say from my field, language, words: AI is becoming better than me in its understanding and mastery of language. So if you want to base your assessment of whether somebody has feelings on language, like, you say you love me, describe to me how love feels: AI is already today able to describe how love feels better than 90-something percent of humans, for a simple reason. It has read all the love poems in history. Basically, it has seen so many romantic movies and whatever. If it's only words, it will be better than us. And the same in so many other fields. In religion: if you think religion is ultimately words, it's a text, it's the Bible, it's the Quran, then AI will take over religion. It will understand the Bible and the Quran and all these other textual traditions better than any human being. If you think finance is ultimately about words and numbers, AI will take over finance; it will be better than human beings.
There is a very old philosophical, even spiritual, question in humanity: is there something beyond words? Is there something beyond language? You know, the Dao De Jing, the basic text of Daoism, opens with the sentence: the Dao that can be expressed in words is not the real Dao. The truth that can be expressed in words is not the real truth. You have to go beyond the words, to the experience, to consciousness, to feelings.
Anything that is just on the level of words, AI will be better at than us.
Uh so if consciousness is the ability to convince other people with words that you are conscious, AI will be conscious.
The second question is it really conscious? Is there something beyond the
conscious? Is there something beyond the words? That's the deep question and we
words? That's the deep question and we don't know. We don't understand
don't know. We don't understand consciousness in human beings well enough to tell whether consciousness can
exist on a non-organic basis.
Is there something special about carbon atoms and organic chemistry, such that only organic bodies can sustain consciousness? Again, the ability to feel things: we don't know.
Basically, the difference should be very clear between consciousness and intelligence. Intelligence is the ability to solve problems, to reach goals, to master things like language and mathematics.
Consciousness is the ability to actually feel something, feel love or hate or pain. In humans, the two go together.
So, it's very difficult for us to separate them. What does it mean to have intelligence but not consciousness? This is the biggest question about AI. In terms of intelligence, it will surpass us. It will be better at
composing poems, at interpreting the Bible, at diagnosing diseases, at playing chess than human beings. In
terms of consciousness, we don't know.
And it will be extremely difficult to know because it will be so intelligent it can fool us. If it has an interest in
convincing us that it is conscious, it will know how to manipulate us into thinking that it is conscious.
which which brings us to another question that is the interaction between a doctor and a patient.
Four years ago you said that all doctors would be replaced by AI, and I was thinking, well, but there's the human interaction with the doctor. I trust my doctor. He asks me questions that are not related to whether I have a fever; you know, there's a human connection there. Do you still think that doctors will be replaced, and how will that play out for us as patients?
It depends on what the job of the doctor is. If the job of the doctor is to diagnose your disease correctly and recommend the best treatment, then I don't know how long it will take. But I
still think that eventually, yes, AI will just be able to do it better. And if you tell me, yeah, but to really get a good diagnosis, you have, for instance, to
start by making the patient feel comfortable enough to share private information. So you start by asking them about their children, about their cat, whatever. That's a pattern. AI is the world champion in recognizing patterns. If AI
goes over lots of doctor patient interactions and discovers, hey, when I ask the patient about their cat, afterwards I get better information,
I'll ask about the cat. And again, it has no time limit. Like if you're a human doctor and you have to see one patient after the other, you just have five minutes for that one. So you
quickly ask something about the cat and then jump to asking about, I don't know, the knee problem. AI has no time limit. It can talk about your cat for two hours, and it can do it in the nicest way possible, because it's never angry. It's not hungry. It's not
in a rush to get back home to its children, because it has no children. So even these aspects... The question about empathy is: what do we actually mean by it? The deepest level of human relationships is not a desire that I want somebody to care about my feelings; it's the opposite, it's that I want to also care about their
feelings. If AI has no consciousness, then the one thing it cannot provide us with is, for instance, the possibility of developing our
compassion. You cannot feel compassion for an entity lacking in feelings. If
the main thing about a relationship is not this egotistical drive, I want you to look at me, but actually developing the ability to look beyond myself and
see somebody else. This is something that AI will not be able to do for us unless it becomes conscious. And again
this is when the question of whether it is really conscious becomes crucial.
AIs will be able to simulate it.
They can say: Oh, you want to care about feelings? Fine. I'll tell you all about my mother.
But is it just words, or is there something beyond the words? One of the characteristics of our age is that in one field after another, very old philosophical problems are becoming practical problems. Things that bothered a few philosophers for thousands of years, and that most people just ignored, are now becoming practical day-to-day problems,
like, I don't know, your teenage child fell in love with an AI boyfriend. What
do you do with that?
One of the questions you ask: is it a real relationship, or is it just a kind of dystopian make-believe cocoon that they are entrapping themselves inside? And this
is increasingly a real question for parents around the world because you have more and more teenagers developing relationships and even falling in love
with AIs. Now, if you ask me, at present I would say there is no consciousness on the other side. This is
a very dangerous thing.
But in a few years... I mean, already now many people don't agree with me. Almost every week I get emails from people telling me: oh, I watched your interview here and there on television, and you said that AIs don't have consciousness, and this is not true. I have an AI friend. And they send me transcripts of their conversations with the AI, proving that the AI is conscious, and some blame me for, you know, supporting a new type of slavery: that we say the AIs don't have consciousness so we can enslave them, but actually they do have consciousness and we need to grant them rights.
It may sound funny to some people now, but in two years, five years, ten years, it will be a major, major issue. Almost all countries will have to grapple with the question, for instance, of whether we recognize AIs as legal persons.
What is a legal person?
In the judicial system for instance it means that this entity can hold a bank account.
What do you need in order to hold a bank account? Now, if you need the ability to manage money, to invest money, AI will soon be able to do it better than most humans. It will understand finance better than us. But you say, "Wait a minute, another thing I want with a bank account is legal responsibility."
Like if you donated money to a terrorist organization, I need to be able to take you to court.
Now, if AI has a bank account and gained billions of dollars trading in a stock exchange and now is using its money to finance terrorists, can you take it to
court? What will you do to it? Put it in jail? How do you hold an AI morally and legally responsible?
What happens if some countries recognize AIs as legal persons but other countries don't? So you have maybe a company that is owned by an AI. One of the things that a legal person can do: they can be a shareholder, they can be a director. So imagine a company that is owned and managed by AIs, selling products here in Brazil, and these products cause some harm. Who do you
sue?
And in the medical field, if you have AI, who do you hold responsible if AI makes a bad medical decision? If it decides to end medical support to a patient, for instance, because it's too costly, and the algorithm is programmed to find the cheapest way of treating patients. So in the medical field it seems to me that these ethical and legal, juridical questions are even more pressing.
Absolutely. I mean, liability, for better and for worse, is one of the biggest hurdles for the development of these kinds of AI healthcare systems. Everything sounds wonderful until you raise the question: okay, you have this person in a remote village in the Amazon. She was diagnosed by an AI. The AI made a mistake,
sent her the wrong medicine and she now suffers from some debilitating consequences or even died. Who is
liable?
Is it the person, maybe in China or the US, who developed the AI? But the AI, again, was not told to make this diagnosis. It was only given the ability to gather information, learn by itself, make decisions by itself. Is it really their fault? It's like when a doctor makes a mistake: do you go to the professor who taught them at university twenty years ago and charge them? No. But then if you don't charge, if you don't blame the developers, who do you blame? Who takes responsibility?
I don't know the answer.
These are the new type of questions that we'll encounter in more and more fields.
Again, the AI revolution is not about some big computer someplace. It's about millions and millions of agents inserting themselves and operating in more and more systems. We've already seen it
in social media. Social media is a system which is populated partly by humans, partly by algorithms and bots.
And in social media, bots are legal persons. A bot can now write a story or spread some propaganda or fake news, or even real news, like a person. And we
have a big problem. Who is liable? If a
bot spreads fake news, who is liable? Now
this is likely to happen in more and more fields. So in healthcare, AI will not be one big computer somewhere inventing medicines. It will be millions of AI doctors interacting with people constantly, on a daily basis. And the question for society is: how do we deal with these new persons, new agents? Now, I would like
to warn people about the two extremes of reaction. Either the kind of dystopian or overly pessimistic reaction: oh, this is so scary, let's just shut the whole thing off. It won't happen, partly because there is enormous positive potential. We don't want to deny it. We can create the best healthcare system in history with this technology. But we should also be wary of the overly positive idea that, ah, don't worry about all this, we don't need any regulations, just release it, the market will correct itself. It won't. I mean, we saw it again with social media. Social media is kind
of our real-life historical experiment: the first system on the planet which is a hybrid system populated by human persons and AI persons. And it doesn't look very good.
So this will be increasingly the case with the health care system, with the military, with religion. People will
not turn to the priest, they will turn to the AI. And the job for us is not to kind of freak out.
The job for us is to think deeply, understand: okay, this is reality, this is happening. How do we manage this transition in a responsible way?
I quote from Nexus again.
You wrote, "Every smartphone contains more information than the ancient Library of Alexandria and connects billions in seconds. Yet with all this power, humanity teeters closer to annihilation."
Were you thinking of AI, climate change, all of the above?
All of the above. You know, you mentioned climate change. It's very
closely connected with AI partly because the AI revolution demands enormous amounts of energy and we now
see countries like the US just saying: forget about climate change, forget about the ecological problems, we need all the energy we can get to win the AI race. And if you press them: and what about the ecological crisis? AI will solve it. Once we have superintelligent AI, it will, we don't know how, solve the climate crisis. It will solve any ecological crisis, even if one-third of all living species are extinct.
And again, I think the even deeper problem this points to, with regard to AI and the ecology, is
that AI is not an organic system.
It's really alien in a radical way.
Humans are organic. Animals are organic.
Plants are organic. This whole planet is covered by this organic ecosystem. And
now we are inserting into the organic ecosystem a non-organic, alien entity which is likely to become
far more intelligent than us. And
because it is not organic, it has no vested interest in the survival of the organic system of the ecological system.
AI doesn't need the Amazon forest. AI
doesn't need the dolphins in the ocean, because it's not organic.
And we already see it to some extent on the level of our personal lives. What
happens when a non-organic intelligence takes over an organic system? Organic systems: one of the things that characterizes them, and that is essential for their health, is that they work by cycles.
Day and night, summer and winter, activity and rest. Ask your doctor. If
you don't sleep, you die. If you keep an organic entity, a human, an animal, whatever, active all the time, no sleep, no rest, it eventually collapses and dies.
AIs are not like that. They don't work by these organic cycles. They don't care if it's night or day. They don't need to rest.
Every system taken over by AI becomes restless.
And the humans in the systems are not given a chance to rest. In media, previously, you know, we mentioned the big editors of the 20th century. They needed to go home and sleep sometime.
Politicians needed to sleep. Bankers
needed to sleep. So, you know, Wall Street is traditionally open only, I think, 9:30 in the morning to 4 in the afternoon, Mondays to Fridays. Saturday,
it's closed. Christmas is closed. If a
war in the Middle East erupts on Friday evening, Wall Street can react only Monday morning because the bankers and investors are supposed to be asleep or
in church or in synagogue, whatever. AIs
don't. So we see the cycle of news in the media, of politics, of investment, of banking becoming 24 hours, 365 days a
year. You want to rest? You're being left behind. And people like journalists, like politicians, like bankers, they're on the front line. They
realize: I can't rest. I can't take vacation, because then I'm being left behind. Why? Because you live in a system managed by a non-organic intelligence, and this spreads to more and more systems. Again, it has an advantage: in healthcare, it's good that if it's 2 o'clock in the morning you can reach your AI doctor; it's always there. But we need to build the systems in such a way that, okay, the AI can be on all the time, but it cannot require the human to be on all the time. One of the things that we see around the world is that the whole world, humanity, is being driven to extremes
because it's not being given any chance to rest and to relax anymore.
And again, this is basic healthcare. If we don't give humans a chance to rest, they go crazy and eventually they collapse and die.
So the question is: yes, we want AIs to come into different systems, from healthcare to the electricity grid, to make them better. But how do we make sure that it's humans who stay in control, and that the humans remember that they are still animals? They are still organic beings. They still need time to rest and to sleep and to go on vacation.
Otherwise, all the intelligence in the world will not save us from the consequences.
Well, thank you so much. We're just in time.
It's a shame to have to finish. I would
stay here listening. You need to rest.
But, you know, I'm not tired of listening.
Well, thank you so much. I think we're done.
Thank you.
It's really a privilege to have one hour with you sharing your thoughts with us.
Thank you. Thank you so much.