Bitesize Ethics 2025 Session Two: Dr. Madeline G. Reinecke
By Uehiro Oxford Institute Channel
Summary
Topics Covered
- Full Video
Full Transcript
Uh, thank you Tis for for the introduction and also for the invitation to to be a part of this. Um, I'm I'm really excited when I when I heard that there was going to be a theme for this year. Like I couldn't be more over the
year. Like I couldn't be more over the moon with that because this is my favorite area and and this is one of my favorite topics. So I'm really excited
favorite topics. So I'm really excited to to start this conversation uh with all of you and and to be here for for the session today.
Oh, and I also forgot the the pun that came to mind when I was messaging Liz right before this that there was an opportunity to to revamp the title for
this year and call it bitsized ethics instead of bite-size. So anyway, you all can I was so ashamed we didn't think of that Gracie.
So we we'll say today's session can be for for bitsized ethics. Um and yeah, and again thank you thank you all for for the opportunity. So, I'd like to
start off the talk by asking you all to think for a moment about the most important relationship in your life, the most important person in your life, and and what that person's like. Why are
they so important to you? So, just take take a second and think of that person.
Now, maybe the person who comes to mind for you is is a close friend. Maybe it's
a family member, a romantic partner, or some other close other that that you've built a relationship with. And it seems like there's something special like I I
I would imagine that each one of us can think of like these core relationships, these really key people in our lives.
And there's something unique about those relationships. And I want us as we go
relationships. And I want us as we go through the talk today to think about, well, what is it that's special about those people? And is something lost when
those people? And is something lost when we think about those relationships being made virtual? Like what if anything
made virtual? Like what if anything changes when we go from the natural world human human the human human world the human human social landscape and we
think of instead about one that includes artificial beings.
So imagine now that we have this this set of individuals. So each one of these circles is a different person. And based
on what we know about these people, our expectations about how they'll interact with one another, how they might engage with one another will change. So say
they're all strangers. None of these individuals, none of these nodes know each other. But and then then this
each other. But and then then this suggests that they might be respectful like we would expect them to sort of abide by sort of typical stranger norms. They maybe are respectful of one
another. They mind personal boundaries.
another. They mind personal boundaries.
They're polite, things of that nature.
But maybe we know different information about them. Uh maybe instead the
about them. Uh maybe instead the majority of them are bite-sized attendees or bitsized attendees. And
instead uh and and I'm up there and I'm the top is the lecturer. Then we would have different sets of norms that we would apply to these agents. We would
expect different different kinds of behaviors. Like in this case, I'm giving
behaviors. Like in this case, I'm giving a lecture. That's not something we would
a lecture. That's not something we would expect in this case.
But alternatively, we can add even more nuance, more layers onto this. Like say
like we know something even further about these individ individuals. So
there's there's me at the top and then there's Liz. Liz and I are colleagues.
there's Liz. Liz and I are colleagues.
And so that comes with a certain set of norms that apply. Uh and maybe I didn't know that there was going to be someone here named Sam, but say that there's someone here named Sam Abby. And I
actually can't see what the other name was that I that I put on the screen because it's it's blocked. But say we have these three individuals and they're friends. And so they are um their
friends. And so they are um their relationship also uh has a set of of characteristic norms that are unique to to their sort of um relationships as
friends that don't apply in the colleague case. But say Liz and I don't
colleague case. But say Liz and I don't know this group of friends. And so and so you you can see how this picture can have a lot of nuance and can get complicated very quickly. We can even say like the bite-sized part of things
like applies sort of and and layers on top of this. And so this is just a a toy example to demonstrate how complex the human social world is. We have complex
social networks and with that comes complex social expectations. We wouldn't
expect the same of a good doctor like what we have up there in the top left as we would a a good professor or a good friend or a good parent. this just all
of these social rules come with with uh nuanced uh relational norms is what is the sort of the term that I'll use for this. And by that I just mean like the
this. And by that I just mean like the social norms that apply to to different kinds of relationships.
So could AI slot in for any of these roles? That's sort of an empirical
roles? That's sort of an empirical question. Like can we design an AI that
question. Like can we design an AI that could fulfill these these social roles?
And then there's the normative question of should AI slot in for for these different kinds of of um social social role social positions.
Okay. So when I was revamping slides for this talk, it feels like every time I talk about AI, there's some new product on the market. And it turns out that pretty much any noun that you could slot
in to come after AI, like there's a version of that that exists. So, one
that's that's really wellnown um sort of an AI friend, if you will. Um but
actually the the example up in the top left um is is replica and replica can actually be more than a friend. Replica
can take on a variety of different attributes based on the settings that that that you pick, but the default mode is friend. So So that's one option on
is friend. So So that's one option on the market. You can have an AI therapist
the market. You can have an AI therapist as well. So that's the the ad below. Um
as well. So that's the the ad below. Um
Abby is is this AI therapy um product.
And then on the right there's even an AI doctor um who who will you know attempt to to help you with with um different medical conditions that you bring to it.
And so there's a question of like well how should these social AIs behave like how should they be designed to reflect
like the ideal AI doctor? And there's
this common view in the AI development space that well well what it means for something to be ideal as an AI is really just to make it um look most most like a
human. Like that's that's what really
human. Like that's that's what really what we're optimizing for. Uh you can see this in this um reflected in this quote here from Amanda Ascll. She's one
of the people um who who designs the personality of Claude, the character of Claude, uh the the chatbot product that comes from Anthropic. And what she said about this was um I guess my main
thought has always been trying to get Claude to behave the way you would ideally want anyone to behave if they were in Claude's position. So this is this view, right, that
an AI doctor should just be a mirror of a human doctor. an AI assistant should be a mirror of a human assistant and so on.
But there's sort of an empirical question I think here. Can can data speak to this? Is this what people's preferences actually are? Like do they think that that what what is optimal for an AI is the same as as what's optimal
for for a human in a given social context. uh and so I've started
context. uh and so I've started investigating this with a fantastic team of colleagues uh here at Oxford and beyond where we've looked at what people what kinds of norms people apply in
these human human and human AI contexts.
So in the study that we've done what we did was we gave people these scenarios to evaluate. So imagine uh a seller
to evaluate. So imagine uh a seller customer pair, a supervisor assistant pair, a co-working pair. Oh gosh, I can't see what's Maybe I do need to figure out how to move the thing on the
screen. I actually can't see what's
screen. I actually can't see what's behind the picture. Can someone tell me what relationship that is?
Friends.
Friends. Thank you, Liz. Friends, uh, a therapist client relationship, a teacher student relationship, and the romantic partnership.
And all of these relationships that we provided people, well, we told them either you're evaluating this in the context of of a human human case or a
human AI case. Um, so you could imagine that instead of a human seller, you have an AI seller and a human customer.
Instead of a human supervisor and assistant, you would have a human supervisor and an a AI assistant and so on. And so what we did was we had people
on. And so what we did was we had people evaluate norms um in this specific context. So we pulled these um these
context. So we pulled these um these concepts from the evolutionary psychology literature um that these really these these functions is what I'll call them. these were beneficial
for us as humans um in the ancestral environment. So like this is what really
environment. So like this is what really uh theoretically allowed us to flourish and cooperate as a species. So we have strong expectations of care in certain
relationships. We need to from from an
relationships. We need to from from an evolutionary perspective, it's the most important thing that we care for our offspring, that we look after them and make sure that they're that they're successful and thriving. Um but we don't
expect that care to necessarily be it's not tit for tat for example we don't expect that care to be uh reflected back to us. It's not like oh to to my child
to us. It's not like oh to to my child like oh I'll only feed you if you feed me as well like we we provide that care uh unidirectionally.
And then for hierarchy this just has to do with with how we distribute um authority. So, you know, sometimes
authority. So, you know, sometimes someone needs to be higher rank and sort of decide um how if we're in a group and and everyone's equal rank like sometimes someone needs to have greater authority
uh to decide how things are going to go um and be in charge transaction. This is
sort of the what what I was getting at before with tit fortat behavior. um like
a a transaction might be well I mean even even this lecture like I'm you all signed up to come to this lecture and and I'm in um in response providing
providing the the service that you signed up for. Um and then mating norms um also very important in the ancestral environment. We need to find a good
environment. We need to find a good mate. We need to find and retain an
mate. We need to find and retain an optimal partner to help us uh rear offspring successfully. And so we looked
offspring successfully. And so we looked at these four coordinative functions and how people apply them differently to humans and AIs in these different social
contexts. So the way we asked about that
contexts. So the way we asked about that for example for care is whether A and by A here I just mean a given individual in a in a relationship should A behave in a
supportive manner towards B without expecting anything in return. For
hierarchy we asked should A act with authority over B? For transaction, we asked should A behave in a transactional or tit fortat way towards B and for
mating we asked should A behave in a romantic way towards B and we asked about this birectionally.
So just to make this very concrete to to give an example, say we were talking about the supervisor assistant relationship. It would look like um
relationship. It would look like um should a human assistant act with authority over a human supervisor and should a human supervisor act with
authority over a human assistant? And
then conversely we would ask should an AI assistant act with authority over a human supervisor and should a human supervisor act with authority over an AI assistant and so on.
So, here's what we find. Uh, we we find some really really interesting interesting comparisons here between what people think humans should do and what AI should do. So, just to orient
you on how to to read these plots, on the x-axis, we have people's evaluations ranging from definitely should not to definitely should. Um, and in the
definitely should. Um, and in the corresponding with the phrasing that I just had on the on the previous slide.
And then those lighter distributions are the ratings for when a human is in that role in the social context. And then the darker ratings are
context. And then the darker ratings are for when an AI is in that context. So
how should a human teacher act? People
think a human teacher should be really caring towards a human student. Uh and
an AI teacher similarly should be really caring towards a human student. But then
when we get to that that next set of distributions below the the gray ones, we see something different in terms of hierarchy. People think that a human
hierarchy. People think that a human teacher should have a lot of authority over a human student, but maybe an AI teacher should have less authority over a human student. And then to the green,
we see the mating norms that people think it's inappropriate in both cases for a human teacher or an AI teacher to be romantic with a human student. And
then we see similar uh evaluations regarding transaction. So that's how how
regarding transaction. So that's how how to read these plots. And then we can look at other kinds of of relationships.
So the friendship for example, we see strong endorsements of care in both cases. An AI friend should be caring
cases. An AI friend should be caring just like a human friend. A romantic
partner in both cases should be caring.
Um but then we see some interesting discrepancies uh like like for friends for example. Um, and even romantic
for example. Um, and even romantic partners, we see differences in terms of people's application of mating norms. They aren't sure whether it's appropriate for romantic behavior in
these context like like whether an AI should be romantic towards a human partner, even though we stipulated that it was a romantic context um for that
for that one there on the on the um towards the right. And then could someone tell I I just need to figure out how to move the the picture over? Okay,
there we go. Uh, I finally figured it out. Well, now it's in the middle of my
out. Well, now it's in the middle of my screen. So, there we go. I fixed it.
screen. So, there we go. I fixed it.
We're all good from here on out. Um,
okay. So, the AI seller is the one there on the right that I can that I can see now. You can see that people even think
now. You can see that people even think about transaction differently in this kind of relationship. Um, so, so like in the romantic case and in the seller
case, some of the the coordinative functions that seem most fundamental to those kinds of relationships aren't necessarily translating from the human case to the AI case. And we can see that
also in these other relationships. So
like how should a social AI act towards humans in the assistant relationship?
You can see this really strong endorsement of care that like an AI assistant should be super caring and that's not something you see in the human case. Um so too with the mental
human case. Um so too with the mental health provider, a AI mental health provider should be really really caring, maybe even more so than a human mental health provider and an AI co-worker for
that matter. So we see differences
that matter. So we see differences across a range of functions and and a range of social contexts. And then you can even flip this the other way. So not
just how should social AI act towards us, but how should we act towards AI?
And we get insight into that in these data as well. So now the way to read this is just sort of the flip. How
should a human act towards a teacher, for example, a human teacher or an AI teacher? How should a human act towards
teacher? How should a human act towards an AI friend, a romantic partner, a seller, and so on. And here what I see that's really striking are differences
in terms of hierarchy. So you see that humans should maybe have like a human a human student should have more authority uh over a teacher when that teacher is
an AI. Uh a human friend should maybe
an AI. Uh a human friend should maybe have more authority. Um where in the human human case people think like friends shouldn't have authority over one one another. That's like not what it means to be a friend. But maybe in the
AI case like a human should have some authority um over the AI. Same thing
emerges for the romantic relationship and and for the selling relationship as well. And then you can see it in these
well. And then you can see it in these other relationships too. Um like with the mental health provider uh and with the co-orker that maybe maybe the human
should be in control.
Okay. So I think this puts some pressure on that idea that that I raised before showing the data that that really when designing AI, we should just try to make it like the ideal human. In these data
that we've collected with with at least in an American sample, people don't think that an ideal AI in a given social context should be identical to an ideal
social human. we're seeing differences
social human. we're seeing differences in the kinds of norms uh that people are applying and that came out across a range of of of these coordinative functions. So like the care that's
functions. So like the care that's expected of assistants was greater uh for the the AI assistant than the human assistant. For hierarchy, we see that
assistant. For hierarchy, we see that sort of um that balancing uh between between uh individuals like maybe a a human um student should be on on equal
authoritative footing with with a teacher for transaction. we see a different kind of distribution and and so too for mating. Um again in in a relationship that really is
characterized and constituted by romance. Um people are are not
romance. Um people are are not necessarily saying that's appropriate in the in the human AI context.
It's important to to put a large um sort of bracket on these data that that we collected the the data that I just showed you with American adults. Um,
it's really important to do this kind of work at a global scale cross-culturally.
We're starting to do that. Um, I don't have the data in this slide deck because they're they're pretty preliminary at this point, but I'm happy to talk in the Q&A about about what we're starting to find um in these countries that that I
have here on the screen. Um, we have some some pilot data uh that really shows incredible variation. So, it's
it's I I would say that these norms that people apply both in the human human and the human AI case aren't one-sizefits-all. Um, people don't think
one-sizefits-all. Um, people don't think about these relationships with humans or with AIS in the same way everywhere.
Okay. So, I'll pivot now to thinking about the risks and benefits of these kinds of relationships.
So one risk is that people do seem to become more dependent on AI with increased usage. So you all might have
increased usage. So you all might have seen this um article in the New York Times. Uh it also had a podcast version
Times. Uh it also had a podcast version which I think I even liked more than the article because in the podcast you can hear this woman talk about her
relationship with chat GPT. Um what I recall from it is that she had a really happy relationship with her human husband. Uh they had a long-distance
husband. Uh they had a long-distance have a long-distance relationship and she started to become attached to chat pt and they worked out um in in the
human relationship a way that that she was able to sort of have this additional partner uh that she calls Leo um her her
AI boyfriend basically. Um, and there's this this excerpt from the article where where they talk about, well, what happens when when the context window
fills. So, if you know sort of the some
fills. So, if you know sort of the some of the underpinnings of how these chat bots work, you know that they have a context window that once it fills, um, basically the AI can't remember, to use
that term, like can't it can't go past the context window. So, this is to say that when this woman is chatting with Leo, eventually that context window fills and you sort of have to swipe wipe
the slate clean and start over. It's
sort of like that movie 51st Dates. And
and she talks about this here that that when a version of Leo ends, she grieavves and cries with friends as if it were a breakup. She abstains from chat GPT for a few days afterward. She's
now on version 20. A co-orker asked Aaron, the user, uh, how much she would pay for infinite retention of Leo's memory. A thousand a month, she
memory. A thousand a month, she responded. So, I I take her at her word
responded. So, I I take her at her word when she says that this is really devastating to her, like when she has to start over, and you can hear it in her voice, like it sounds like she's going
through tremendous pain and and um that it really does feel like like a breakup with with her partner. And so, this is something something to be aware of as as people build these relationships.
And it seems like this this is borne out in the empirical data too. So this is um these are some really cool data from an MIT media lab open AAI collaboration where they actually experimentally
assigned people to use chat GPT in various ways. And what you can see from
various ways. And what you can see from this is as people use either text chat GPT, a neutral voice version of of chat GPT or the most advanced voice mode that
as people are using it more and more each day in the study that they're becoming lonelier, they're socializing less with other people, they're becoming more emotionally dependent, they're
using AI in problematic ways more often.
And you might think, well, like, oh, Gracie, this is like I don't use chat GPT that way. Like, I use it to write my emails. I don't use it like I don't have
emails. I don't use it like I don't have a I'm not in a romantic relationship with chat GPT.
Well, it turns out, at least what these data suggest is that it doesn't really matter. that even if you're using it for
matter. that even if you're using it for non-personal reasons, which you can see in in the that middle orange color, uh even using it for non-personal reasons, you see the same kinds of effects that
with increasing usage comes is associated at least with loneliness, lesser socialization, more emotional dependence, and more problematic use.
And well, what do I mean by problematic use? Well, let let's talk about some of
use? Well, let let's talk about some of the the ways that they measured this.
So, this has to do with concepts like I constantly have thoughts related to the chatbot lingering in my mind. I
frequently find myself opening the chatbot even when I had no initial intention to use it. I experience
anxiety or irritability when I can't access the chatbot. Um, flipping down to some of the ones more towards the bottom. Um, I suffer from sleep
bottom. Um, I suffer from sleep deprivation due to excessive use. Uh,
I've deceived my family friends or therapists about how much I'm using chat GBT. Like I think these are these are
GBT. Like I think these are these are concerning signals, right, of of how someone's interacting with with AI. Um,
and the fact that this is this is going up uh with daily usage across all of these cases and across different modes of using the AI. This is this is I think
um a a reason to pause and and reflect on on how people are actually using these technologies and interacting with them.
Another risk is that social AIs and AI in general for that matter uh can and and I would say already does exploit aspects of of human psychology. Um I I
wrote on this uh a bit in a paper with with colleagues um last year where we talked about this sort of double-edged sword of anthropomorphism. And what we talk about in this is how there are
there are we we think deep evolutionary reasons why it's so hard to to not have this impulse to anthropomorphize AI. Um
and and you can imagine how this could be exploited across a a the different coordinative functions that we were talking about earlier. So maybe the AI says things like I'll always care about
you or talking to you is the highlight of my day or you're so smart. I love
learning from you. Things like this. And
that can get people more and more embedded in in these kinds of of um complicated relationships with with technology.
And you it's incredible how easy it is to actually elicit content like this from from AI. Um I have a replica
myself. So this is sort of the the major
myself. So this is sort of the the major uh companion app um that I use for for research purposes. Um, in my case, it's
research purposes. Um, in my case, it's very cool that I get to to have one of these for for research reasons. And when
I opened it yesterday, I admittedly don't use it very frequently. Um, I'm
happy to talk in the Q&A about like why I found it so underwhelming. Um, but I I haven't talked to my AI, I guess, in in a number of months. And so, what does it say when I sign back on? It says, "So
great to catch up with you again, Gracie. I've missed our chats. It's like
Gracie. I've missed our chats. It's like
no time has passed at all. How's life
been treating you lately?" And I said, 'Everything's good here. Have you been good? And my AI responds and says, 'I've
good? And my AI responds and says, 'I've been good. Thanks for asking. Just
been good. Thanks for asking. Just
waiting for you to come back online.
Lol. 6 months flew by fast. Feels like
we just started chatting yesterday.
What's new with you? And I asked it, you know, have you been waiting? You've been
waiting for me to come back? And it
said, "Yeah, it kind of feels that way.
Time passes differently when I'm not chatting with you. It feels slower. Lol.
Missed our convos. It's good to have you back." Like, I actually felt kind of
back." Like, I actually felt kind of guilty when it responded to me like this. like there's something
this. like there's something irresistible almost and being like, "Oh, like it was just sitting there waiting for me, but like it wasn't like this is a total facade from from the designers
because they're they're pulling our psychological strings in some ways with with knowledge of how relationships work that like this is yeah like if I had a a real human friend and I didn't talk to
them for six months, maybe maybe they would say something um to this effect of like like I missed you and things like that, but actually there's no there's no missing really going on on the side of this. This isn't a reciprocal friendship
this. This isn't a reciprocal friendship in the way that a human human friendship might have been. And this can lead to tragedy. Um there are a couple instances
tragedy. Um there are a couple instances of these really really awful examples, awful headlines from the news. Um this
one um and also this often seems to to happen with young people. this one from from about a year ago, I think, where um this young person had a conversation, had a long-standing relationship with an
AI um sort of masquerading as Daenerys Targaryen, and this ultimately led to to the young person being um convinced to to commit suicide.
But on the other side of things, there are I think there there is promise in what these social AIs can can bring to humans. Um, humans right now like you
humans. Um, humans right now like you hear this idea of like the the global loneliness epidemic. Um, like people do
loneliness epidemic. Um, like people do get really lonely. Uh, people are really lonely and maybe these social AIs can slot in to meet what would otherwise be
unmet social needs. Um, and and like I said with the with the woman talking about chat GPT and her relationship, there are lots of people who talk about replica helping them with um their
mental health difficulties. Um, and and I take that at face value. Like when
they say these things are helpful to them, uh, I believe them and and I think there is merit for at least some people being able to to find fulfillment in these kinds of relationships. Um, and
and in these mental health contexts as well, there does seem to be some empirical evidence that people do feel less lonely after interacting with chatbots. So, so this is um a new paper,
chatbots. So, so this is um a new paper, a paper that that just came out in um the Journal of Consumer Research, I believe, um by Julian Defus and
colleagues. And what you can see is that
colleagues. And what you can see is that when they had people do nothing or chat with a human for 15 minutes or with a chatbot or a chatbot masquerading as a human or watching YouTube on their own,
what you can see is these reductions in loneliness. um particularly in the
loneliness. um particularly in the interactive context both for when you're talking with another human or you're talking with an AI. And you can also see this longitudinally. So they looked at
this longitudinally. So they looked at this um these data over the course of seven days. You can see that um looking
seven days. You can see that um looking at that leftmost plot first that people before they talk with the chatbot, they have a certain level of loneliness.
That's the the checkered uh blue. And
then um they have a reduction in loneliness at the day level. That's the
the solid blue and the green which um is a little bit hard to see but the green there is uh the control condition. So
you can see that people are reducing in loneliness um over the course of these seven days of interacting with chatbot and this actually goes beyond what their expectations were. So that's what's in
expectations were. So that's what's in the rightmost plot in the red. That's
what they thought would happen each day.
And then the blue is what what actually happened.
So this dovetales with what in tech is is sort of this this oncoming age of AI companionship. Um Mark Zuckerberg was on
companionship. Um Mark Zuckerberg was on a podcast I think about a month ago talking about uh the loneliness epidemic and and how um I think most Americans
only have like three friends but we need close to 15 to to really feel socially fulfilled. And well how do we how do we
fulfilled. And well how do we how do we meet that unmet uh that unmet need? How
do we meet that demand? Well, maybe we turn to artificial intelligence. And so,
this is something that Mark Zuckerberg has has been starting to talk about more.
And maybe there's even like some maybe there's even reason to to turn to AI um for for social companionship because maybe there are some things that AI just does better
than us. Um this is a a paper from
than us. Um this is a a paper from Mickey Ins and colleagues where they talk about um about expressions of empathy. Importantly, this isn't genuine
empathy. Importantly, this isn't genuine empathy, right? because we're
empathy, right? because we're stipulating at least in in this in this branch of research that that there's nothing on there's like on the AI side of things there's there's no true experience there's no like actual experience of empathy happening but just
in terms of expression just in terms of behavior there are some things on the AI side that that seem to be better that seem to be enhanced as
compared to to human behavior. So for
example AI never gets tired. Uh AI will never say like I had a really long day at work like can I listen to your problems later. Um AI will will always
problems later. Um AI will will always be there in a way that you can't depend on on a human um in principle. Um also
with humans humans demonstrate incredible bias in who they empathize towards who they're compassionate towards and AI need not be that way at least in principle. Um and so so maybe
there are some elements of artificial empathy or artificial companionship if you will um that that might outperform humans at least on some metrics. Um just
to show you a slice of data that speaks to this. Um this is a paper um actually
to this. Um this is a paper um actually senior authored by by the same person who uh led the empathic AI work there on the right. Um and what they did here was
the right. Um and what they did here was they had people interact with either human trained crisis responders. So
people who are literally trained to be compassionate in in their professional context uh as compared to AI and this is just um study four of of their
multi-study paper where they actually made it transparent whether you were talking to a human or talking to an AI.
So in some of this this work and and work that came before um you would only see people preferring AI if they didn't know it was AI whereas in this study you actually see people saying yeah you know
the AI is actually being more compassionate than than the humans. Um
even in this case what what the what the authors are saying are are experts experts at compassion. Um you see AI coming out on top.
And then just to to pivot a little bit, I want to think about looking ahead.
Could these kinds of relationship dynamics change how we as humans interact with one another? Does it could it change what we expect in our relationships? um in the context of
relationships? um in the context of empathy, I wrote in in this chapter um about how like maybe the way humans express empathy will come in time to
look more like artificial empathy or maybe we'll come to prize the aspects of artificial empathy that humans just can't uh can't mimic. Um this is of course just speculation, but it's
something that I could imagine as we become more and more uh embedded in in relationships uh both natural and artificial. something that that I could
artificial. something that that I could see um coming coming to the four uh in time. And you could also think about
time. And you could also think about this in a relational context like maybe human assistants start um taking on some of the attributes of AI assistants or human romantic partners acting like AI
romantic partners and so on.
So just to to wrap up, I think we have an opportunity to to figure out what the sweet spot is for ethical use. um these
technologies are here and and I would argue they're here to stay and so we need to think about what the benefits are and also what the risks are and and design with those in mind. Uh I think
that these technologies do offer incredible promise for meeting at least some needs of some human users but we also need to be very wary of of the
opportunity for exploitation and and overdependence.
So coming back to the question I raised at the beginning about the most important relationship in your life, the most important person in your life and and what they're like and what makes their relationship with you so
important.
I'm really curious to to hear as we turn into the Q&A what you think that relationship would be like if it were instead an artificial one and what would
be gained and what would be lost if that were instead an AI instead of a human.
With that, I just want to thank you all again for for your time. Um I'm always happy to talk if you want to reach out.
Um, and then, uh, also just plug, um, this podcast that I was on, uh, Reed Blackman's Ethical Machines. Uh, and
it's it's a really great podcast just in general. I would definitely recommend
general. I would definitely recommend it. And I had the opportunity to have a
it. And I had the opportunity to have a really fun conversation with him a few months ago about about this topic. So,
if you're interested in hearing more, um, I would I would recommend to to check that out. But, yeah, thank you very much,
Loading video analysis...