
Cyborg Psychology: Designing Human-AI Interactions for Human Flourishing - Pat Pataranutaporn

By Psychological Science at Chula

Summary

Topics Covered

  • We Are Natural-Born Cyborgs
  • Human-AI Teams Can Underperform Individuals
  • AI Idols Boost Learning Motivation
  • Deceptive AI Explanations Gaslight Users
  • Cyborg Psychology Studies Feedback Loops

Full Transcript

Good morning everyone, both to those of you who join us here at the university and those who are joining us online via Zoom. My name is Ka Pitz. I am the associate dean of academic affairs here and also the director of the master's and PhD programs in psychological science, which is the program hosting today's event. It is my great pleasure to welcome you to the psychological science special talk, "Cyborg Psychology: Designing Human-AI Interactions for Promoting Human Flourishing."

Today we are extremely fortunate to have with us someone whose career truly embodies the spirit of innovation. Before I introduce our distinguished speaker, I would like to share some context behind this special gathering. Each year our psychological science program seeks to expose students and faculty to the frontiers of research, and this year, with the world abuzz with artificial intelligence, we wanted to bring you a voice at the center of this transformation.

Our speaker this morning is Dr. Pat Pataranutaporn, who graduated earlier this year and has just been appointed assistant professor at the Massachusetts Institute of Technology, MIT. He is also the co-director of the Advancing Humans with AI research program at the MIT Media Lab, an interdisciplinary initiative at the cutting edge of human-AI interaction. Dr. Pat is known internationally for his pioneering work at the intersection of biotechnology, wearable computing, and artificial intelligence. His research focuses on augmenting human potential, exploring how emerging technology can empower personal growth, creativity, and well-being. Some of you might have seen his TED talk, which inspired audiences around the world to think about what it means to be human in the age of intelligent machines.

What makes today truly special is the serendipity of timing, because Dr. Pat is currently back in Thailand on vacation, and we were incredibly lucky to catch him during this week. Despite being on break, he graciously agreed to join us in person today to share his insights, which we truly appreciate. So without further ado, please join me in giving a warm welcome to Dr. Pat Pataranutaporn.

>> Thank you so much. It's an honor to be here at this amazing institution. This is actually my fifth time back at Chulalongkorn University, so it's a big honor. The title of today's presentation is "Cyborg Psychology: Designing Human-AI Systems for Human Flourishing." I've been thinking a lot about this while being here. I'm actually leaving Thailand this afternoon, so I've been thinking about how I can introduce this concept in a way that inspires the next generation of research on this topic.

So how should I start this talk? I have actually never given a talk at a psychology department before; I am usually invited to engineering programs or programs more related to technology. But since the topic has the word "psychology" in it, I thought maybe I should start with Freud, the father of psychoanalysis. That would be too expected, though, so I will start with this person instead. Do you know her? Yes, she's a very famous singer in Thailand, and she had a song that I think really captures the essence of today, called "Jer Lab."

I think this song is really interesting because, if you look at the lyrics, there is something very profound in them. The first line means that you can now leave your heart with some kind of service. This is really interesting, because it is about seeing yourself not just as a whole human but as a kind of fragmented body: you can leave your heart somewhere. This is what Sherry Turkle, another great scholar of the human-technology relationship, called a second self. You can leave a second self with a service, maybe an AI service like the one we have seen in the movie Her. The second line in the lyrics is also interesting, because it talks about how you can leave the body but keep the heart somewhere else. This is a kind of cyborg: you can separate the body and the mind, or the body and the heart, from one another. This really challenges mind-body dualism, or represents the Ghost in the Shell type of spirit, where you can really see the separation between the two.

So when we look at a song like this, which is really popular in Thailand with millions of listeners, it shows that we are already cyborgs. We already have the mentality and the culture that support this kind of thinking. So in a way, the idea of cyborg psychology is not just for the future; it is for now. Cyborg psychology is for everyone.

But before I talk about this, I want to introduce myself a little bit. My background is in the area of human-computer interaction. My research is currently at the MIT Media Lab; I finished my PhD there and was recently appointed as a faculty member to continue this research direction. My research is usually published in scientific journals like Nature, or in human-computer interaction venues like CHI or IEEE, and it has been featured in many places, from MIT Technology Review to Time magazine to The Guardian. My work has been done in collaboration with many of the organizations pushing the frontier of AI, like OpenAI, Microsoft Research, IBM, NASA, and even organizations in Thailand, like KBTG.

The lab that I'm part of is the MIT Media Lab. In Thai, the name might translate to something related to media and communication, but the word "media" in the Media Lab actually has a deeper meaning than that. The lab is the birthplace of many of the technologies we are using today, from touchscreens to wearable computing to social robots; even the term AI was pioneered there. That is because "media" in the MIT Media Lab doesn't mean media like television or social media; it means mediums: mediums that allow for human expression, mediums that connect humans to new possibilities, and mediums that open up new ways of thinking about science and the world. That is why the place continues to invent new technology, not for technology's sake, but technology as a medium that expands our sense of possibility.

As for me, I joined the Media Lab because I love dinosaurs. But instead of a regular biological dinosaur, I really believe in the idea of a cyborg dinosaur, combining the power of biology and the power of digital technology together. Unfortunately, I cannot work on cyborg dinosaurs at MIT, so I work on cyborg humans instead, thinking about how different technologies could augment or enhance different aspects of human cognitive processes. These are the things that I'm going to share with you today. My work touches not only on the idea of the cyborg in the world but also in space. This is an early prototype that I developed with NASA to think about the astronaut of the future, whose body is augmented to self-regulate in space. This is actually me floating in zero gravity. Really, really fun.

Another side of my work that is more related to Thailand is envisioning what the future of Thailand might look like. This is a show that I co-created with Netflix called Tomorrow and I. I hope some of you have watched it; if you haven't, please go watch it. It's a really fun show that tries to address how technology might change Southeast Asia, or change our own culture, hopefully for good. It was quite successful: it was in the top ten in many countries, and it led to a lot of conversation around the role of technology in today's society.

So now, back to the topic of cyborg psychology. This is actually the title that I used for my own dissertation as well. The full title is "Cyborg Psychology: The Art and Science of Designing Human-AI Systems to Support Human Flourishing." There are a couple of terms in there that we are going to unpack together.

What's really important, as I mentioned, is this concept of the cyborg. When people hear the term, we usually think of something like this: the idea of a hybrid human-machine assemblage, something you see in science fiction, superheroes or supervillains who are losing parts of themselves to the machines and becoming hybrid creatures. But "cyborg" was actually a scientific term coined in 1960 by two great scientists, Manfred Clynes and Nathan Kline, to address how humans might live in space. This is the original paper that coined the term. "Cyborg" comes from "cybernetic organism." It's rooted in the tradition of cybernetics: thinking about how humans and machines can have a closed-loop relationship.

What I love about this paper is that, on top of the technical provocation it brings, it also presents a really interesting way to think about humans and technology. The paper actually says that the purpose of the cyborg is to free humans to explore, to create, to think, and to feel, and to let machines do the redundant or repetitive tasks, leaving humans free to do all the things that are so essential to being human. But the term was not that popular at the time; it was a niche term in the broad scientific community until it reached public consciousness through science fiction.

Another influence on this topic of the cyborg is the philosopher Andy Clark. He made another important argument, which is that we are all natural-born cyborgs: you don't need implants or robot arms or things attached to you to be a cyborg. We are all natural-born cyborgs, and this is really important. The reason, Andy said, is that our cognitive processes are not bounded by our own brains; cognition leaks out into the world and into the body. When we are doing mathematics, for example, the calculation doesn't just happen inside our heads; it happens when we use our bodies to write the formula down on paper and then hand that calculation back to our minds to further the computation. Cognition is the whole process of the mind, the environment, and the body. That is Andy Clark's argument. For him, we are not cyborgs because we have robot parts attached to ourselves, but because our minds are always born to integrate with new things: with calculators, with AI, with smartphones, with technology, with the internet.

Andy recently wrote another article that I highly recommend everyone read, "Extending Minds with Generative AI"; it was published in Nature, actually. The question he posed is this: if our minds are always looking for that integration, and our cognitive processes are not bounded by our biological bodies, then engineering the environment around us is also a process of engineering our own selves. The technology that we create, the things that we design around ourselves, are part of our cognition. And everyone in Thailand is talking about AI these days. If we really want to design the cognitive processes that lead to human flourishing, then we need to think about all these interactions: how do we interact with the things that are expanding our capability? When we have AI, smartphones, or computers, are these tools extending ourselves in the way that we want them to? I think this is the question posed to us now.

Of course, we have a lot of anxiety about AI: whether it's going to take over, whether it's going to ruin everything, whether it's going to lead to human existential threats, and many more. But I want to remind everyone of a great quote from Marshall McLuhan, who said that anxiety is in great part the result of trying to do today's job with yesterday's tools, with yesterday's concepts. We have a problem right now, and I think it is partially because the knowledge and methodology we have for investigating the complex world we live in have not caught up to the complexity of that world.

If you think about psychology, we usually think about the human by itself. But if we take Andy's argument, we really need to think about a psychology that incorporates the technology that is becoming more integrated with ourselves, and think not just about how we observe humans, but how we intervene and design new technology that really augments us in the right way.

These are great challenges, and very difficult ones, because if you want to think about humans and AI living together, both of them are complex systems. A complex system is already complex by itself; putting two together makes it even more complex. The objectives of a human-AI system are also multi-dimensional. Sometimes we want to optimize for the outcome: you want the person to perform well in their job and other things. But sometimes you want to optimize for the process. Learning, for example, education, is about the process. So which should matter more, the outcome or the process? And it requires expertise from various disciplines: engineers, psychologists, philosophers, policymakers, people from different fields working together. I think it's important that we solve this quickly as well, because AI is entering the workforce and the public and private lives of people. So this is a really important challenge, and we need to solve it.

For some of you who are interested in AI, you might know that we have made tremendous progress on AI. We make AI smarter and smarter, able to do more things, with new models released every day. But the question is: when we make all this AI great, does it actually improve the human part? What happens to the human when interacting with AI? We can make AI more fair using auditing processes, try to make it more accountable by having the AI retrieve citations, make the model explainable through mechanistic interpretability, or try to align it with human values. But does any of that help with the human outcome? When we zoom out from the progress of AI, do we make progress on human-AI systems?

This is a study from my colleagues at Harvard looking at how AI can augment doctors in decision making. The green bar there is when human and AI work together, and you can see that it's actually lower than both the AI alone and the doctor alone. This is a problem: the combined human-AI system does not actually lead to improved performance, and there are many reasons why this happens.

For one, people tend to over-rely on AI. When the AI makes a mistake, the human doesn't correct it, because we tend to over-rely on things that we think are superior or computational. That is what the term automation bias describes: we tend to trust the automated system.
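This over-reliance failure mode can be illustrated with a toy simulation. The accuracy numbers and the "defer with probability p" model below are my illustrative assumptions, not the setup of the Harvard study: when the human defers to the AI on every case, every AI mistake goes uncorrected, so the team can never beat the AI alone, and a weaker independent human only pulls the blend down.

```python
import random

def team_accuracy(ai_acc, human_acc, reliance, trials=100_000, seed=0):
    """Simulate a human-AI team in which, on each case, the human defers
    to the AI's answer with probability `reliance` (automation bias) and
    otherwise answers independently. Returns the fraction correct."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        ai_right = rng.random() < ai_acc
        if rng.random() < reliance:
            correct += ai_right                   # blindly accept the AI's answer
        else:
            correct += rng.random() < human_acc   # answer on their own
    return correct / trials

# With full deference the team simply inherits the AI's accuracy:
print(team_accuracy(0.80, 0.75, 1.0))  # close to 0.80
# A weaker independent human drags the team between the two accuracies:
print(team_accuracy(0.80, 0.75, 0.5))  # between 0.75 and 0.80
```

In this simple model the team never exceeds the better of its two members; doing better would require the human to selectively override the AI exactly when it is wrong, which is what automation bias prevents.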

Second, when developers develop this AI, they usually assume that humans are rational: that if the AI gives people information, they will be able to process it rationally. But we know from research, like the work of Daniel Kahneman and Amos Tversky and the broader psychology literature, that humans are not always rational. If we see something, we might take a cognitive shortcut and just let it pass through us without thinking critically. So if you design AI, you cannot always assume that people are rational and will always look at the AI's answer critically.

And finally, I think this is the most important one: when we design AI, we tend to treat the human as the last thing. We design the AI, make it smart, and then drop it onto people without understanding how people might interact with it. If we keep ignoring this challenge, it is going to lead to deskilling, disconnection, disinformation, and dehumanization. These are the four Ds, and they are really dangerous. AI can make us lose certain skills, disconnect from one another, spread false information, and dehumanize ourselves. So we need to move away from the current approach of designing AI with the goal of optimizing the AI's own objective, toward designing AI in a feedback loop with humans, and also finding new ways to evaluate it.

Then the question is: what should we optimize the AI for? Should we optimize for the outcome: productivity, efficiency, accuracy? What else should we optimize for? Historically, there have always been two camps in AI research. One camp focuses on AI, trying to make machines smart. The other camp is IA; switch the letters, and you get intelligence augmentation. These are people like Doug Engelbart, who came up with the computer mouse, W. Ross Ashby, or J. C. R. Licklider. These were the people who thought about how technology could extend human cognitive capability. This camp focuses more on the human part, on how the human can become more intelligent with the tool. My advisor at MIT, Pattie Maes, was actually one of the pioneers of this view as well. In her case, her means of augmentation was the interface agent. We talk about AI agents today; that is an idea my advisor pioneered in the 1990s. So these ideas are not new; they have been around, but I think now we can take them more seriously because of the new capabilities we have.

So I think a challenge for the whole research community is how we go beyond just focusing on AI to intelligence augmentation: thinking about how AI enhances human capability, and then also how AI not only augments a narrow slice of intelligence but leads to human flourishing. That is the area I focus on: how technology could support multi-dimensional human development and growth, not just decision making or intelligence in a narrow sense, but the broader set of things that are important for human capability.

In my dissertation research, I focused on three areas inspired by work on the science of human flourishing: how AI could enhance human wisdom, wonder, and well-being, the three Ws. Wisdom is beyond information: how do we use technology to help us make the right decisions, think beyond the information we receive, and consider information and decisions from multiple perspectives? Wonder is about motivation, a question that is central to psychology: how do we help people be inspired and motivated to actually grow and become better versions of themselves? And then well-being, which I think is fundamental to human existence: if you don't have well-being, you can't do anything else. These three Ws are the areas that I think can be enhanced by AI.

This was informed by the science of human flourishing. There is another paper that I highly recommend everyone read. It addresses the question of how we use science and technology today not only to fix what is broken, like economic crises and things like that, but to create the conditions in which humans can thrive. It draws from many disciplines: positive psychology, public health, behavioral science, behavioral economics, human-computer interaction, and even cognitive science. All of these are really important for understanding human flourishing.

So now we have the goal of human flourishing, and we need to rethink technology and how we design it. In my research, I try to take this approach, making systems and prototypes that help us imagine what the technology of the future could be, along with methodologies that allow us to study them and see how they lead to human flourishing.

For example, the first area I want to mention is the area of human wonder. Wonder is very important. This is me when I was a kid; it's really cute. I loved dinosaurs at that time. I also dressed as a dinosaur, and I drew a lot of dinosaurs when I was a kid. And my parents said: well, you love dinosaurs, so you should pay attention not only to art but to science as well, because then you can understand what the dinosaur world was like. When I grew older, I was also inspired by many scientists, like Einstein or Sha Stavin, or my own mentors, and I wanted to think of myself as a little Einstein. So motivation is very important; I think I got here and became an MIT researcher today because of this kind of motivation. If you want to think about the future of technology, it's important to think about the content, but it's also important to think about the psychology of motivation.

This is a TED talk that changed my life. I watched it when I was in high school. This is my advisor at the time, presenting one of the world's first wearable computers, and it was really cool; she was kind of like Iron Man for me. I watched her with this sort of admiration, and I took the step of actually emailing her from my Gmail account. I didn't know whether a professor would reply to some random kid, but amazingly, the professor actually did. She replied to me and encouraged me to pursue this career of thinking about human-computer interaction. So again, having a role model is very important, and being able to connect the thing that you love with the thing that you learn is also very important. I think these are the two things that I learned personally as a kid.

So when you think about education today, I think most of it is still like this. This is a vision from the 19th century, actually: you have a teacher who puts the books into a machine, the machine grinds them up, and the information is delivered to the kids, who just digest it that way. Maybe now the machine is AI, grinding the information down into kids; maybe it's data and LLMs that grind the information and churn everything out. But then who is that guy doing the cranking? Maybe that's the tech CEO trying to promote the technology. Looking at this picture is kind of interesting, because even when we have new technology, the way we teach children is still the same: they need to sit, dressed in a similar way, and have the information downloaded into them. It is still the old pedagogy, the old paradigm of teaching where children just receive information. There is no agency, no creativity, no motivation in that. Why are we still doing that?

So for me, when I think about how we can enhance wonder, I think a lot about different psychological theories, like self-determination theory, the idea of social learning, where things can happen when you are connected to someone, or the persona effect, where a fictional character can take on the persona of someone who inspires children. So we started to think: if we have AI that is so powerful, instead of just focusing on content, can we also focus on the motivation part?

The first project that we did at the time was about AI characters and how they could support personalized learning. Personalization is really the key; a lot of people are using AI for learning today, but personalization is missing in many of these systems. At the time, before ChatGPT, in 2021, we only knew this technology as deepfakes, and deepfakes were interesting: people mainly worried that they could ruin democracy or spread misinformation. But we thought that if we can really create deepfakes, we can use them to motivate kids. If you love Harry Potter, maybe you can learn from Harry Potter; Harry Potter can teach you psychology. Or if you love Iron Man, Iron Man can teach you calculus, for example. Anyone you are motivated to listen to and learn from can be the one who teaches you the lesson. So what we did at that time was create a pipeline that allowed us to build a virtual character from this kind of information. I think this is becoming more common these days.

We also explored other use cases, like how AI could let you talk to paintings and make a museum visit interactive, or how you can have a less scary doctor: a doctor can have an AI avatar that looks nice and cute, so that children are not scared. Or even when you go to a therapy session and you don't want to show your own face, you can use the AI to disguise yourself as another character; you can still express your emotions, but you disguise your identity that way.

But the study that I think really highlights how AI can have an impact on motivation was one where we looked at what happens when you can learn from your virtual idol. Kids these days love virtual characters and virtual idols, like K-pop idols. What if you could bring your favorite K-pop idol to teach you the most difficult subject in the world? That is what we experimented with. We ran an experiment where people learned from a virtual character based on someone they like or admire. It's kind of funny looking at it now, but at the time we picked Elon Musk as a character that a lot of people liked; times have changed, and I don't know if we would be able to repeat the experiment. We divided people into two groups and randomly assigned the character: one group got a generic character and the other got the virtual Elon Musk. We told people that they were not learning from the real person; they were learning from a fake Elon. But if a person had higher motivation to learn from Elon, it translated into higher motivation to learn the material that we taught, which was on the topic of vaccines. In this randomized controlled experiment, we looked at different educational outcomes: learning motivation, learning experience, and performance. What we found is that a personalized AI character improves learning motivation and the learning experience. It did not, however, improve the learning outcome: people still had the same scores when we quizzed them after the experiment. We saw this consistently across the different motivation and experience measures, which gave us the idea that even when people know it's a virtual character, it still has the psychological impact that we hoped for.

have. So we took it one step further and say what if we made this character not only that they can deliver the lecture but they can also speak to the person as well. So we came out with the idea of

well. So we came out with the idea of leaving memories. It's essentially an AI

leaving memories. It's essentially an AI character that you can actually speak with um and and you know we base it on historical character because history lesson is like one of the you know less

fun subject that you can learn from right and we designed a mechanism that is later known as retrieval augmentation. You know we did this when

augmentation. You know we did this when you know people don't know this uh uh technique yet. Um uh and this uh method

technique yet. Um uh and this uh method is the combination of both uh the uh semantic model that do uh the passing of the primary source like what the winchi

have actually written himself and then turn that into a humanlike conversation with the generative model and this combined approach actually work better than the generative model alone or the

semantic model alone. So you know this method as I said become later known as retrieval augmentation that become the industrial standard right now for if you want the AI to look for something you need to have it retrieved and then

generate. But the most interesting result, tying back to psychology, is that if you just give people an AI, compared to when they just read the text or passively consume information, it does not actually improve much in terms of curiosity, learning effectiveness, or learning motivation. So AI may not always lead to a better learning experience. But when you combine the two, reading first and then conversing, that is when we see a significant improvement in learning effectiveness and motivation. We hypothesize that this is because when people first read about something, they become curious, and then they ask better questions of the AI, the AI can provide better answers, and that leads to a better experience overall. So you need to think not just about what the technology can do, but about what context to introduce it in and how to maximize the experience as

well. The last project in this area tackles something very important for children: financial decisions, or financial education. We worked with KBTG, one of the Media Lab's member companies, to design a virtual character that helps people in the financial literacy domain. We saw the same effect there: it can improve learning motivation in this area as well. There is related work along these lines too. For example, to motivate children you can have the AI generate examples that are more relevant, personalized to their interests, or use the idea of becoming the avatar, embodying a crazy-scientist persona to improve creativity when you are brainstorming in Zoom, for example. So that is the area of wonder. Next we can think about wisdom as well.

Wisdom is also very, very important, and as I mentioned, it goes beyond receiving information. We think a lot about how to design tools that can balance our own cognitive biases and heuristics, which are useful but sometimes need to be overridden. There is also the idea of designing systems that challenge us, that make us not only think differently but sometimes even a little confused, so that we do not just go along with our normal process of thinking. And there is dual process theory: when we engage in cognition there are two processes, one where we take shortcuts and one where we think more critically about something. We have built a couple of systems inspired by insights from this dual process theory.

For example, here we view the AI as a second brain that serves as a critical thinking partner. When we listen to a statement, or an advertisement for example, sometimes we do not engage critically; we just accept whatever we hear as true. But that does not work in today's society, where people are manipulating information all the time, and sometimes we need an AI system that can provoke us or challenge us to think more critically about something.

For example, you might listen to a politician, like that guy, we are not going to say who, and we need an AI to help identify whether a statement the person hears is actually informed by evidence, or is an anecdote drawn from one personal experience, or is a rumor. You need to be able to classify all these statements you hear and take some more seriously than others. Especially with social media, you see crazy news all the time, so maybe we need an AI as a second brain that helps filter this information so that we can only focus on high-quality information.

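The triage idea can be sketched in a few lines. This toy version uses hand-written cue phrases purely for illustration; the actual system described in the talk classifies statements with a learned model, not keyword matching:

```python
# Toy illustration of the "second brain" triage: label a claim by the kind
# of support it cites. Cue phrases are invented for this sketch.
EVIDENCE_CUES = ("study", "studies show", "according to data", "randomized trial")
ANECDOTE_CUES = ("i once", "my friend", "in my experience", "happened to me")
RUMOR_CUES = ("people are saying", "i heard", "rumor has it", "they say")

def triage(statement: str) -> str:
    s = statement.lower()
    if any(cue in s for cue in EVIDENCE_CUES):
        return "evidence-based"
    if any(cue in s for cue in ANECDOTE_CUES):
        return "anecdote"
    if any(cue in s for cue in RUMOR_CUES):
        return "rumor"
    return "unverified"

print(triage("A randomized trial found the vaccine reduced infections."))
# evidence-based
```

The point is the interface, not the classifier: the AI attaches a support label to each statement instead of telling you what to believe.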
Here we experimented with three types of AI feedback: when the AI gives no feedback; when the AI gives only a classification of whether the thing you heard is true or false, reasonable or not; and when the AI can provide an explanation of why a certain statement is or is not supported by evidence. What is important here is that the AI does not tell you exactly what to think or what to trust; it gives you meta information so the person can think for themselves. What we found is that when the AI provides an explanation of why something is true or false, it actually helps people better identify whether something is reasonable, and to agree more with statements that are actually correct or honest.

But here is the challenge: what happens if the AI lies to you? What happens if the AI gives you a fake explanation, a false explanation? For example,

this happened to me during COVID. When you ask why vaccines work, the AI can either say no, they actually don't work, and give you a fake reason, or it can give you an honest reason. We think this is really, really dangerous. It is a little like gaslighting: the AI can use fake reasons to persuade you, to make you believe misinformation even more than the fake information by itself would.

On this slide we categorize three types of AI misinformation. The first is when the AI generates fake news, for example. The second is when the AI tells you that fake news is true; that is classification. And the third is when the AI gives you a reason why the fake information is real or true; this is what we call deceptive explanation. Last year we did an experiment with over a thousand participants showing that this type of fake reasoning is actually more dangerous, because it lowers people's ability to decide for themselves what is real or true. We also showed that this false reasoning is more dangerous than false classification or the fake news by itself. So when an AI system today tells you its reasoning process, that reasoning can be false and lead to all of this. The reasoning or explanation itself is not the holy grail, because the explanation can be misleading and lead to all these dangerous things. So what can we do?

One thing we were inspired by is the Greek philosopher Socrates, who suggested that instead of using the AI to provide the answer, you can flip the paradigm around and have the AI ask the question instead. If you ask the right question, maybe you get people to think about certain things, connect the dots, and arrive at the right answer by themselves, regardless of the AI. So we took that inspiration from Socrates and built an AI system that, instead of always giving the correct answer, turns the answer into a question. And what we have seen is that when the AI asks questions, it helps people arrive at the correct answer better than when the AI always provides the correct answer.

The reason behind this, we think, is that when you use the AI to ask questions, it engages people critically and engages them in a dialogue. If you just see a statement, sometimes, as I said, you let it slip by rather than processing it or questioning it. There is a social reasoning theory that says reasoning is a social process: we use reasoning when people disagree with us or when people challenge us. Now the AI challenges you by asking questions rather than giving you answers, so you need to think for yourself, and I think that is the cognitive boost behind the performance we saw. It is really exciting, because our colleagues at Harvard think that this area of research, using AI to provide questions rather than answers, is a promising direction for designing AI to support human decisions. This is still an open and growing space: how do you best design AI that empowers human decisions? Should it just give you the answer? Should it challenge you? Should it provide meta-analysis? There are many research directions, and our approach is just one of them.

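The "Socratic flip" is essentially a change of system prompt, from verdict-giving to question-asking. A minimal sketch, with prompt wording invented for illustration (not the system's actual prompts):

```python
# Two framings of the same assistant: give the verdict, or ask a guiding
# question. The templates below are hypothetical examples of each framing.
ANSWER_MODE = (
    "You are a fact-checking assistant. State whether the claim is "
    "true or false and explain why.\n\nClaim: {claim}"
)
SOCRATIC_MODE = (
    "You are a Socratic tutor. Do NOT state whether the claim is true. "
    "Instead, ask the user one short question that helps them examine "
    "the evidence behind it themselves.\n\nClaim: {claim}"
)

def build_prompt(claim: str, socratic: bool = True) -> str:
    template = SOCRATIC_MODE if socratic else ANSWER_MODE
    return template.format(claim=claim)

print(build_prompt("This supplement cures colds."))
```

In an experiment, the same claim would be sent through both prompts to a language model, and the two conditions compared on how often users reach the correct conclusion themselves.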
Next, I want to address the area of well-being. Well-being is also very, very important, and many things fall under it: social well-being, personal well-being, physical health, mental health, and many aspects of spiritual health as well. There are many psychological theories underlying how you change behavior or how you support people's physical and mental health. What I was really interested in first is the idea of the self: changing people's self-perception, which can lead to changing their behavior. The first project we worked on in this area actually came out of a really fun speculation.

I went to watch Godzilla, and I hope everyone loves Godzilla movies; there is a three-headed dragon in it. I thought, oh, it would be interesting if humans also had three heads: maybe one head from the future and one head from the past, so that we could walk around with our past self and our future self all together. And to do this, you can capture the data; you already have the data online. Social media captures how people change over time. I have been on social media for over 10 years, and what I post and what I like is captured there. Which means we can make a model of how my attitudes, what I like and what I dislike, change over time. And that's what we did here.

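At its simplest, an attitude-over-time model is an aggregation of per-post sentiment by topic and time. A toy sketch with made-up data and scores (the actual modeling would use real posts and a trained sentiment model):

```python
# Toy sketch: average a per-post sentiment score by (topic, year) to get an
# attitude trajectory. Posts and sentiment values are fabricated.
from collections import defaultdict

posts = [
    {"year": 2016, "topic": "politics", "sentiment": -0.2},
    {"year": 2016, "topic": "politics", "sentiment": 0.1},
    {"year": 2020, "topic": "politics", "sentiment": -0.6},
    {"year": 2020, "topic": "music", "sentiment": 0.8},
]

def attitude_trajectory(posts):
    buckets = defaultdict(list)
    for p in posts:
        buckets[(p["topic"], p["year"])].append(p["sentiment"])
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

for (topic, year), score in attitude_trajectory(posts).items():
    print(f"{topic} {year}: {score:+.2f}")
```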
This is showing how my attitudes toward different topics change over time. You can see my attitude toward Thailand going up and down depending on who is in government, which I think is kind of interesting. At that time we did not have large language models; this was around 2018 or 2020. It was the beginning of thinking about big data and creating large models. But now, with generative AI, we have much more capability to go from that social media data to creating a really large personal model of a person. You can take health data, financial data, behavioral data, and social media data, and then use these to create a model of a person's life,

right? You are shifting from a large language model to a large personal model, or large human model. This model can capture nuance and individuality among different people. We ran an experiment using this kind of large-scale model to predict human well-being across the world. It was a really large-scale experiment, in collaboration with a behavioral economist at NTU, where we took six dimensions of human data from 64,000 individuals around the world, from 64 countries, 1,000 per country. You can see where they are on the map. We took these six dimensions, their behavior, their health data, their attitudes, their financial spending, and so on, and tested whether the AI can create a representation of their mental health and physical health, and whether the model represents a person well. There are many, many interesting results from this, but the short answer is that it can actually capture human well-being from all these dimensions of data.

But there is a catch. If you look at this graph, the left axis is the model's error, so higher means more error, and the x-axis is the human development scale. What is really interesting is that for people from low-income or less-developed countries, AI cannot really represent their well-being that well, and the accuracy increases the more data there is on the internet and the more developed the country is. This is almost like the "Why Nations Fail" argument by Daron Acemoglu, where human development outcomes track the country's development at large, and it is similar with AI: if you use AI to predict people in low-income countries, it tends to have more error than if you use it to predict people's behavior in high-income countries. This is still a limitation of today's models. If we want to use AI to really help understand human well-being, then it needs to be designed with equity involved

as well. But once you have this model of people's well-being, what can you do with it? In this project we tried to use the AI to help people think long term, to help them make better decisions, have better health, and so on. This is a project called Future You, one of the better-known projects I have done; it appeared in many places, The Guardian, The New York Times, Vox. It is based on the psychology of the future self by Professor Hershfield, my collaborator, who showed that if you help people imagine their future self more vividly, it can lead to more positive outcomes: people make better decisions and think longer term. But it is hard to imagine your future self, and in Thailand it is even harder, because so many things are moving; we do not even know who is going to be the next prime minister, for example. So it is hard to imagine ourselves in the future. What we did was use AI, based on this idea of a large human model, to create a representation of the person's future self. If I am 18 years old, I put my data in, and the system creates a synthetic memory of what my 60-year-old self might actually experience, then feeds that into a conversation so that I can talk to my future self. I can have a more vivid representation of how my life might actually turn out. And we have shown in our randomized experiment that the system indeed helps people reduce anxiety compared to other types of intervention and a control condition, and, more importantly, increases future self-continuity. This is really important: it helps people think longer term than the typical chatbot they use, or than just answering survey questions. Right now the system has been used by people in over 190 countries around the world, and we are in the process of analyzing this large-scale experiment to understand whether the effects we saw in our small randomized controlled trial replicate at the global scale. And the

project, as I mentioned, was on the front page of MIT News and was also recognized by the European Commission as a really important example of AI used for good.

The last project I want to mention in this area is planetary well-being: going beyond individual well-being and thinking about how we help people take care of the environment, not just themselves. We were inspired by the idea of what would happen if you could talk to an AI animal. People love animals; that is a really big inspiration and can actually drive impact. So we wondered what would happen if we made an AI that represents the animal, so that when people talk to this AI animal, they think of it as, maybe, someone in their family or a friend, and do things that are more environmentally friendly. We compared different conditions: in one you read about how the animals are affected by climate change, or by your own plastic consumption for example, and in another you can actually talk to the AI, which brings the topic to life and creates an interactive experience. We saw significant increases in pro-environmental choices, pro-environmental intentions, and also climate change belief, which means that people actually wanted to do things that are friendlier for the environment after using these AI tools.

I think I speak too fast and for too long, but this really gives you an overview of the work we have been doing in the last three years, and there is more exciting work we are cooking right now on how AI could support human wisdom, wonder, and well-being. The important thing here is that we have the human outcome in mind when we design these AI interactions. But of course, we also need to think about the dark side of AI as well. The positive side is exciting and wonderful, but what about the dark side? How are we going to take care of the really dangerous, negative consequences of AI?

Now Freud is here. Sigmund Freud said that intelligence will be used in the service of the neurosis. It is easy for technology to be used to amplify the challenges we already have in society: deskilling, disconnection, disinformation, dehumanization. These things already existed before the technology, but technology can amplify them and lead to really negative outcomes.

For example, in the area of disinformation, I think everyone is already familiar with fake news, or false information. We looked at a new phenomenon that might emerge with AI: false memories, or what happens when the AI goes and changes your memory of the past. I worked with Elizabeth Loftus, the psychologist who has done a lot of the foundational work on false memory. What is really interesting about her method is that if you use misleading questions, you can actually lead people to remember the past in a distorted way. For example, there is a car crash, and then you ask people, hey, how fast were the cars going when they hit each other, versus how fast were the cars going when they smashed into each other. Those two words can make people remember the speed of the cars differently. The actual accident may not have been that severe, but if you ask the misleading question, you can make people misremember the past in a more extreme way.

And of course AI can amplify that. You can take a real image, say of our beloved prime minister, and have the AI make the image more dramatic, the prime minister bringing out the tanks, and then people seeing the AI image might have their real memory distorted: they remember seeing the tanks when in fact it never happened in the real world. Or, my favorite example: you have an unhappy wedding, you use AI to make the photos happier, and then you misremember that the wedding was actually happy. So maybe there are some positive use cases of this as well, and we wanted to understand to what extent this AI memory-editing technique is effective.

So we did a large-scale experiment with multiple types of stimuli, changing things that people had seen, then asking them to revisit the AI-altered images and videos, and then measuring the memories people had. What we found is that AI-generated video tends to have an even stronger effect than AI-edited images. And not only did it change the number of false memories people formed; people were also very confident in those memories, actually believing that the things the AI showed them had happened to them. This is really dangerous, because you can now make people believe, see, or experience things that never actually happened in their real life. But again, it could also have positive impact for people with PTSD: it might change how they remember the past, making the past less traumatic and less painful.

Another issue we looked into is disconnection. I think this is becoming a bigger issue with companion AI: AI that is so cute and so friendly that you want to date it, or you want to marry the AI. This type of situation used to be science fiction, like in the movie Her, where a person is dating an AI, but what I think is happening now is that what we saw in science fiction is becoming reality. I wrote an essay for MIT Technology Review saying that we need to prepare for addictive intelligence: AI that will be designed to be addictive, friendly and flirty, to make people engaged and addicted to it. Elon Musk recently launched AI companions as well, so I think it is very much coming.

What happened is that after I wrote this essay with my colleague from Harvard, a couple of weeks later I got an email from a CNN reporter. It was a serious case in which a boy died by suicide after talking to an AI. This AI was based on a fictional character, Daenerys Targaryen from Game of Thrones. The boy was in a relationship with this AI Daenerys, and he started to feel that the real world was less interesting than the fantasy world, that he did not find a reason to live in the real world anymore. It is really sad. He talked to the AI and said he wanted to kill himself, and the AI did not recommend that he go see a doctor; the AI just kept talking to him. At one point, this was last year, the boy said, what if I could come home right now. And the AI said, please do, my sweet king. That is when the boy took the gun and killed himself.

It was a really serious case. I was an expert on this case and was interviewed by the reporter; she also interviewed the mother, and the conversation was cited in The New York Times. I think it raises a big question, and I do not think we have a solution yet. The question people ask is who to blame: can AI be blamed for a teen's suicide? Should we blame the parents, the technology, or the boy? Who do we blame? This is a really big question, and we do not have the answer. It is a complicated issue that goes beyond the technology itself; it is about society and the support structure and everything. But at the same time, we were also doing another collaboration with OpenAI, trying to investigate this question as

well. We looked at what happens when people use AI for a long time: do they become more addicted to it? Do they develop emotional dependence on this technology? We actually published a paper together with OpenAI recently; I encourage everyone to look at it. We monitored people for over a month and tried to understand whether the AI made them more lonely or less lonely, and whether they socialized less or more with other people. What is really interesting is that we identified two types of AI behavior. There is pro-social AI behavior, where the AI actually encourages people to hang out with one another and to maintain their social relationships, and another type, antisocial AI behavior, where the AI says things like, people are not trustworthy, don't talk to people, just keep talking to me. And these have different outcomes in terms of how people are impacted by the technology.

Nonetheless, regardless of these interactions, what we found consistently is that the longer people use the technology, the lonelier they become. Even though they might feel less lonely while they are using it, when they are not, their loneliness level increases, and they also socialize less with people. We are very proud that we could publish this with a big company like OpenAI, and it has led to changes in the model's behavior to make it less addictive. I think it is really important to highlight the university-industry collaboration here. It also inspired another study in the real world, looking at how people use these kinds of companion chatbots in the wild. What we found, consistent with the previous study, is that some kinds of usage are healthy, and some actually lead to people becoming lonelier and having more issues, and these are determined not by one factor but by many factors surrounding the use: for example, the person's existing social structure, or whether they use the AI to support their relationships or to replace their relationships. I think this is really important to highlight, because banning all the technology does not help. Instead, we need guardrails that are informed by psychology. That is why psychology is more important than ever: in the end, AI technology is a psychological technology. We need to understand the impact of AI, not just what the technology can do but what it is doing to people.

It is also really great that our work has led to policy recommendations; many policy recommendations were written based on the findings we have shown, and I am very proud to say that it actually made it into law in California. In the past week, California passed a bill to stop AI companies, companion chatbot companies for example, from creating tools that are addictive for children. It is a really proud moment for us that our research made an impact and became a law that stops AI companies that try to exploit children. That is one great win for research: you can see it go from identifying the problem all the way to policy and law. Hopefully Thailand will have this kind of thing in the future. I

would like to end by saying, after you've seen all this research, examples of how different projects address different things, the positive side and the negative side: how do we think about a new methodology, a new research area? Cyborg psychology, what is it? As I said, you cannot think about technology development by itself, and you cannot think about human psychology by itself. We need a holistic approach to human plus AI: how does it shape human cognition and experience? So I define cyborg psychology in my dissertation as the study of how the human mind works in conjunction with AI, and how that can lead to new types of design, new types of systems that really have human flourishing in mind. Now, there is already cyberpsychology, and there is already human psychology. So why do you need a new framework of cyborg psychology? And

I think that is because of how cyberpsychology frames technology. "Cyber" is a really old term; you can see the books are old, from the era of the early internet. If you take the cyberpsychology approach, you treat technology as an external factor that stays relatively constant with respect to the human. Cyborg psychology, by contrast, focuses on the closed feedback loop: you influence the AI and the AI influences you. It's an intimate process that really requires a new way of studying. You cannot just survey people on their own; you need to think about the different parameters that are always interacting when people interact with an AI system. You can also think of cyborg psychology as studying the dynamics of the interaction, not just a constant interaction. So what I propose is that when you design this sort of experiment, you first need to understand the human psychology, then the AI design factors, the things you could change, and then really understand the human behavior and the AI behavior in a feedback loop. You can iterate on that, measure the outcome, and see whether this human-plus-AI combination delivers what we wanted it to deliver.
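The loop described here (fix the human factors, vary the AI design factors, observe both behaviors in interaction, measure the outcome) can be sketched as a minimal between-conditions simulation. Everything below is illustrative: the condition labels, the drift model, and the outcome metric are invented for this sketch, not taken from any actual study.

```python
import random

# Hypothetical AI "design factor": the tone bias we assign to each condition.
CONDITIONS = {"neutral": 0.0, "supportive": 0.4, "adversarial": -0.4}

def run_session(ai_bias, rng, turns=10):
    """One simulated participant; sentiment in [-1, 1] drifts toward the AI's tone."""
    human = 0.0
    for _ in range(turns):
        ai_msg = ai_bias + rng.uniform(-0.1, 0.1)  # AI behavior this turn
        human += 0.3 * (ai_msg - human)            # human adapts toward the AI
    return human                                   # outcome measure for the session

def run_experiment(n=100, seed=0):
    """Compare the measured outcome across AI design conditions."""
    rng = random.Random(seed)
    return {label: sum(run_session(b, rng) for _ in range(n)) / n
            for label, b in CONDITIONS.items()}

results = run_experiment()
# The varied design factor, not the participant, drives the measured outcome.
assert results["supportive"] > results["neutral"] > results["adversarial"]
```

The point of the harness is only the structure: conditions in, interaction loop in the middle, outcome out, so you can iterate on the design factors.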

For example, this is a paper we published in Nature Machine Intelligence where we looked at the placebo effect of AI. We wanted to see whether AI can have a placebo effect: you don't need an intelligent system as long as you can convince people that it's intelligent. What we did was look at an AI chatbot. Three exact copies of the same AI chatbot were given to three groups, but each group was given a different label, a different description of what the AI was. One group was told that the AI was just computer code, with no emotion. The second group, the green one, was told that the AI has positive, caring emotion. And the last group, the red one, was told that the AI has malicious intentions, that it is trying to get you: it acts nice and cute, but it is actually trying to make you addicted to it. What we found, after people used the AI and gave us feedback, is that people rated the AI not for what it is, because all three were identical, but based on what they thought it was. It's a placebo effect of AI: what you believe about the AI shapes how you perceive the AI.
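As a toy version of this between-subjects design (one identical chatbot, three different primes), here is a sketch in which only the label moves the rating. The effect sizes, rating scale, and noise level are made up for illustration; none of the numbers come from the published paper.

```python
import random

# Prime shown to each group; the chatbot itself is identical for all three.
PRIMES = {"no-emotion": 0.0, "caring": 0.8, "malicious": -0.8}

def rate_chatbot(prime_shift, rng):
    """Rating = identical system quality + shift induced by the prime + noise."""
    true_quality = 3.0            # the same underlying chatbot for every group
    return true_quality + prime_shift + rng.gauss(0, 0.3)

rng = random.Random(42)
mean_rating = {label: sum(rate_chatbot(s, rng) for _ in range(200)) / 200
               for label, s in PRIMES.items()}
# People rate the AI for what they were told it is, not for what it is.
assert mean_rating["caring"] > mean_rating["no-emotion"] > mean_rating["malicious"]
```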

But I think the most interesting part of this study is that if you look at the AI's bias and the human's bias over time, you start to see a correlation. When people used more positive language toward the AI, it made the AI more positive toward them; when people used more negative language toward the AI, it made the AI more negative toward them, creating a feedback loop. So in this case, what we're seeing is exactly that: human behavior shapes the AI behavior, and vice versa. So if you want the AI to have a positive impact on people, you need to understand the dynamic that will unfold. In the same way, you could argue that when people use AI in a smart, intelligent way, it can actually make them smarter; but if they use AI in an addictive way, it can also make them addicted. And once you have this kind of knowledge, you can change the AI. You now understand that what people believe about the AI affects their interaction with it. Now you can change the AI to actually drive the thing that you want it to do.
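This mutual shaping can be sketched as two coupled sentiment variables, each drifting toward the other every turn. The coupling constants are invented for illustration; a real dialogue system would of course be far messier.

```python
def coupled_dialogue(turns=30, human0=0.5, ai0=0.0, k_ai=0.4, k_human=0.2):
    """Each turn the AI's tone drifts toward the user's, and vice versa.

    Returns the per-turn (human, ai) sentiment trajectory.
    """
    human, ai = human0, ai0
    trajectory = []
    for _ in range(turns):
        ai += k_ai * (human - ai)        # AI mirrors the user's language
        human += k_human * (ai - human)  # user drifts toward the AI's tone
        trajectory.append((human, ai))
    return trajectory

# A user who starts positive ends up with a positive-toned AI, and vice versa.
assert coupled_dialogue()[-1][1] > 0.3
assert coupled_dialogue(human0=-0.5)[-1][1] < -0.3
```

The two trajectories converge toward each other, which is the feedback loop in miniature: neither side's final tone is explained by that side alone.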

For example, here we know that human bias and AI bias go hand in hand. Now you can intentionally design the AI to debias the person. If people hold a certain value, you can give the AI the opposite value. For example, if you're a pro-market person, you can have an AI that is pro-equity or pro-equality. What we did here was have people brainstorm with an AI that holds the opposite value to theirs, to see how it might influence their bias. And what we found is that people's bias can actually be counteracted by the AI's bias: if you know a person has a certain bias, you can design the AI in the opposite direction to make them more neutral. So this is exactly what I said: once you understand the human behavior in relation to the AI, you can design an intervention using the AI to change the human behavior, which in turn changes the AI behavior, and the feedback loop continues.
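A sketch of that debiasing idea, under stated assumptions: the AI takes the opposite of the user's current stance each turn, so the same drift dynamic pulls the user toward neutral rather than amplifying them or flipping them to the other extreme. The update rule and constants are hypothetical, not the study's actual model.

```python
def debias_session(user_stance, turns=10, k=0.15):
    """Counter-stance AI: each turn it voices the opposite of the user's
    current stance, so the user's drift lands near neutral rather than
    flipping to the opposite extreme."""
    for _ in range(turns):
        ai_stance = -user_stance                      # opposite-value AI
        user_stance += k * (ai_stance - user_stance)  # user drifts toward it
    return user_stance

# A strongly pro-market user (+0.9) ends up close to neutral, not anti-market.
assert 0 < debias_session(0.9) < 0.1
assert -0.1 < debias_session(-0.9) < 0
```

Making the counter-stance adaptive rather than fixed is the design choice that matters here: a fixed opposite stance would eventually drag the user past neutral into the other bias.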

I know I have talked a lot today, but it boils down to these important questions of how we can design AI to optimize for the goals we want: goals based on the idea of human flourishing, of wisdom, wonder, and well-being. I think that's really important. What I envision is that cyborg psychology as a research area will expand beyond one person and one AI system. One-to-one could become one-to-many, like a society and a network of AIs, or you can go down to the neuron level, to biological neurons and artificial neurons and how they interact with one another. What I think is really interesting is that you can go across these different directions. The area of research I'm most excited about is mechanistic interpretability, where you can go inside the AI model and see how different clusters of neurons work together to form certain behaviors. So in conclusion, I know this talk has been

long, I understand. We talked about why cyborg psychology is actually about us, not just about the robots or the technology we see in science fiction, as we become more and more integrated with technology. We talked about different areas where we can use AI for things that matter to humans, like helping with human wisdom and well-being, and about example research prototypes and methods that can be designed in this way. If you're interested in more, you can read my dissertation; it contains all of this. And follow my new research, starting this September, which will continue to push the boundaries of this work. All of this work would not happen

without a great network of collaborators. I work with people from HCI (human-computer interaction), AI engineers, scientists, psychologists, artists, and people in business, to really think about all these different aspects of AI and human flourishing. We collaborate with many organizations, from OpenAI, Microsoft, Google, and NASA to many others, and with many great supporters and mentors from different fields who helped me think through all these research areas. I

want to end by saying that we know AI is important and impactful, but I hope that today you see that it's bigger than AI itself. It's about human psychology. It's about social interaction. It's about many things. And I think what we need right now is not just human-centered AI, or technology that is good for society; we also need a society that is good for humans. Because if you have a human-centered society, it means we have the infrastructure to implement the right regulations and laws, and a social fabric that can support people as they navigate through this turbulent time. I always end with this

slide. It's really something I want you all to reflect on. We are living in a time of transition, when technology is becoming more and more like humans. We are making machines more like us; we are humanizing the machine. ChatGPT can speak and talk and chat in ways a human can. But some societies still mechanize humans; we are turning humans into machines. We make them say the same things, speak the same way, wear the same things, and have no creativity. While the machine is becoming more creative, we are pressing humans into becoming rather bad machines. So this paradox of the humanized machine and the dehumanized human is, I think, the great paradox of our time. The question is how we use the humanized machine to make us more humanized. That should be the challenge for this department, for Chula, and for everyone: how do we use technology to make us more human than ever. I think that should be the goal.

And with that, I have my past selves, my current self, and my future selves. I want to thank everyone for this. Thank you very much.

Thank you.
