
Roundtable Discussion Working Group 1 - AI4People Summit 2025 - Advancing Ethical AI Governance

By AI4People Summit 2025

Summary

Topics Covered

  • AI Apologies Erode Trust
  • Privacy Tech Enables More Surveillance
  • Privacy Protects Against Unfair Modeling
  • Regulation Enables Reliable Innovation
  • Trust Institutions Not Tech Companies

Full Transcript

I mean, I can take a first comment. Burkhard, I think you had mentioned this issue of apologies and how systems that apologize imply more trust. I think that's definitely a question worth studying, because I have to say my personal experience with systems like ChatGPT and others that constantly make mistakes, at which point I will correct them, and they will apologize and then continue to make mistakes, only gets me more annoyed, not less, and less likely to trust them, not more. But I agree this is the question: what are the devices through which we engender trust, and what mechanisms are useful to build that kind of trust?

>> Yeah, absolutely. Um, we have a raised hand I hear from the audience. Amanda?

>> Uh, good morning everyone, and thank you so much for the great interventions.

I had a question. I saw that more than one speaker has spoken about the dichotomy of fairness and privacy. It reminds me a little of the discussion of regulation versus innovation, and I am increasingly wondering if this is a narrative that is possibly pushed by the industry rather than what is technically possible. I was wondering whether any of the panelists has thoughts on whether it's a true dichotomy that we accept, or one that we should challenge as a field, so that we pursue both objectives, such as privacy and explainability. I'm also referring a little to the counterfactual explanations that I think Burkhard and David started speaking about: whether there are other forms of desirable levels of informational transparency, where we anonymize data while allowing those who are affected to understand how their data is being processed, to achieve a different form of meaningful transparency while preserving privacy. And while I personally agree that there might be specific scenarios where that dichotomy holds true, I'm wondering whether current architectures are simply not allowing for this to be true. For instance, with transformer architectures we accept that they are not good decision-support systems. I am wondering whether we should be pushing for alternative architectures that are more based on symbolic approaches. So I don't know if anyone is able, and happy, to address this. Thank you.

>> Excellent. So we have already moved into the next round. Suresh, do you want to take that one on?

>> I'll address the question on privacy and fairness and the perceived dichotomy. I think, Amanda, you're not wrong to point out that some of the rhetoric around this is coming from vested interests in the space. A paper that my student, colleagues, and I wrote recently talked about how privacy-enhancing technologies are used: as a way to say, we can collect data from you, but because it's privacy-enhancing, we won't know anything about it. But what that actually does is build up an even more intrusive and excessively surveillant data-collection infrastructure, under the pretext that it will be privacy-preserving. In practice, all that means is that it increases the amount of surveillance companies can do on you, because they can look at a lot of data in aggregate even though your personal information may not be revealed. So the effects and the impacts on communities and on people at large are increased, even though it looks like your data is not being revealed. This idea of privacy as a tool for alleviating these problems becomes problematic, because it is then used as a rhetorical device to do even more surveillance.

On the question of privacy versus fairness, or privacy and fairness, I think that too is a confusion caused by a misconception of where the locus of responsibility lies. If we talk about why privacy is important, one way to argue it is that we don't want our data to be used in ways that could be harmful to us. And that use matters: if a private company is collecting my data and using it in a particular way, we may be concerned about the use, for example, to decide whom to hire. We might be concerned they're using information about me that they shouldn't be using to decide whether to give me a job. Now, if a regulator whom I might trust wants to investigate this company and needs access to data to determine this, then my privacy concern is not necessarily with the regulator but with the company, and I'd perhaps be more willing to let the regulator look at some of my data to determine whether I'm being discriminated against in this particular case. So some of the perceived dichotomy between privacy and fairness fails to recognize that there are different actors with different responsibilities and different stakeholders, and that whom we want the data to be private from, and whom we want to make sure uses our data in a responsible way, are often very different. We need to think about that a bit more carefully.

>> Excellent, thanks. And there will be ways to get back to that answer as well, Suresh. I'm going to hold my mouth here at this point, because otherwise we are going to be here for the rest of the hour. But I think David was the next one who raised his hand, and then we move on to Ann.

>> Uh, sure. Yeah. I mean, the first thing I'd want to push back on is that I just don't really see the tension between privacy and fairness. We know that there are plenty of cases where, in fact, to have a fairer model we need to not know certain information about people. If you include that information, especially for underrepresented groups, it will cause the performance of the system to go down, because you're stratifying your data in a more fine-grained way, and that can lead to actually worse performance. So it's not even this sort of simple fairness-by-ignorance idea. It's just that sometimes the best thing for a fair system is for people to not reveal certain kinds of information. So I confess that one strikes me as straightforward.

But I wanted to connect to something Suresh was saying, which is that when we think about privacy, I think it's important, certainly here in the United States, for us to move beyond the simplistic view that so many people in the US have of data as a property right, as a thing that you can own and sell, perhaps in return for access to a service, and start shifting towards much more of a use-based conception of privacy. I kind of don't care who has a lot of the different kinds of data about me, as long as they can't do things that undermine my autonomy, that harm my interests, that prevent me from being able to take advantage of goods to which I have legitimate rights and access. So we need to shift to that.

And I think the same goes for the question you also raised about explainability. When we think about explainability, we need to think about why we want explainability. What are we trying to use the system to do? My own view is that many, perhaps even most, of the techniques we've developed for explainable AI essentially answer the wrong question. They answer the question of how, causally, this black box works. But almost always, when we want explanations, we want to know why the black box is giving me the right answer. And knowing the internal causal structure doesn't help me know the rightness or correctness of the answer that's produced by that causal structure. Rightness and correctness call for an external explanation, not an internal one. So I think one of the challenges we have is that there are, I don't know, 27 different features that people have tossed out as really important or essential for AI systems. But until you know the use, until you know the context, it's very hard to know which of these actually matter, what it means for them to matter, and how we might implement a system in a way that supports them. And that kind of information is something companies have a vested interest in denying: they say, no, we have a general-purpose system, it can be used everywhere for everything, which is antithetical to the idea I'm suggesting.

>> I think that also directly relates to what AI4People was trying to do. We started in 2017-2018, when the big problems were things like COMPAS, where we dealt with AI systems that had one specific purpose, and it was relatively straightforward to know what they were expected to do and not expected to do. And then, directly towards the end of the AI Act already being more or less agreed, generative AI comes up, the general models, the foundation models, and the Parliament rushes into some necessary but probably not quite as well-thought-through amendments to the Act, to also cover systems where we don't know in advance what the operating rules are. And, again reflecting on quite a lot of things others have said before now, my approach probably would be to say they don't have a purpose; they are literature devices, literature-writing devices, where you can't ask, as you just said, whether there is an external reality that corresponds to them. It's narrative coherence, more than anything else, that we might be able to spot. And you were next on the list.

>> Yeah, thank you, Burkhard. Well, thank you for your question, Amanda, and your comments. I guess I would say that human beings tend to put things sometimes into these dichotomies and create these tensions as a way of having people pick sides, as a way of advancing a certain agenda. You know, the Markkula Center is in Silicon Valley, and so we're surrounded by people who are continually trying to position things in certain ways. You talked about this tension between regulation and innovation, for example; that's one of those paradigms people put out there as if you have to make a choice between the two. Certainly that's what the techno-optimist wants you to believe: that if you have some kind of regulation, it will somehow impede innovation. But we know from other industries and how they've developed that regulation can often help things move more quickly. Once there is a regulatory framework, a liability framework, where people can see who is going to be held accountable for certain acts and actions within an industry, that clarifies for people what should be done and how those decisions should be made.

But I also want to say that humans, as long as they're still involved in the process of making these decisions, try to solve these problems, try to work out how we can resolve these tensions where we perceive them. There is all this technical work, not my area of specialty, but I admire those who have developed it, on differential privacy practices, for example, which are ways of infusing statistical noise to make it difficult to identify any given individual's data in a data set, but which don't impede group-level analysis; they might reduce its accuracy, but they still allow it to happen. And private federated statistics, which is part of that, distributes the analysis across discrete individual systems or devices: instead of sending large volumes of user data to a central server, data scientists can send queries to individual devices and still receive an aggregate report back that allows them to identify trends across groups. So there are mechanisms people have started to develop to find solutions when people put these tensions or dichotomies out there and say they can't be resolved. I think our tendency is to try to still protect both privacy and fairness, and to hold them as values that we want to advance.
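[Editor's note: the mechanism described here, per-device Laplace noise summed into an aggregate, can be sketched in a few lines. This is a minimal illustration of local differential privacy for a counting query, not any specific product's implementation; the function names and data are invented for the example.]

```python
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sampled as the difference of two exponential draws."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def device_answer(values, threshold, epsilon, rng):
    """A single device answers 'how many of my records exceed threshold'
    with Laplace noise added locally, so raw records never leave the
    device. A counting query has sensitivity 1, hence scale 1/epsilon."""
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon, rng)

def federated_query(devices, threshold, epsilon=1.0, seed=0):
    """The analyst only ever sees the noisy per-device answers; summing
    them still recovers an accurate group-level trend, because the
    zero-mean noise largely cancels in aggregate."""
    rng = random.Random(seed)
    return sum(device_answer(d, threshold, epsilon, rng) for d in devices)
```

This is exactly the trade-off Ann describes: each individual answer is too noisy to reveal one person's data, while the aggregate over many devices remains accurate enough for group-level analysis, and the accuracy loss grows as epsilon shrinks.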

>> Thanks for sharing your thoughts. Thanks a lot, Ann. Are there at this point any other comments from the audience, from the wider membership that is with us today, so that we on the panel do not monopolize the discussion too much?

Anyone? I mean, for me, just to explain a little why I think that there is sometimes a tension between fairness and data protection, one of the things pretty much all of you came back on and said you don't quite believe in: there was this American case a couple of years back. Husband and wife individually apply for a credit card. They know each other's credit rating, income, everything, exactly. So they were really surprised when he got the credit card and she didn't, because they knew that she was earning more, that they both had excellent credit ratings, that pretty much everything was the same, and that those things that differed all differed in her favor. And that knowledge, knowing that the other received something they didn't, allowed them to make that challenge and to say, look, something is going wrong here. You claim that your decision followed the rules. Maybe it did, but as a system, as a bank, you are very clearly giving preferential treatment here to the male of the species.

That was, for me, the starting point for thinking about a conflict between fairness and data protection or privacy: even if the AI was able to give a valid reason why her credit card application was refused, it would not also have been able to give a valid reason why her husband's was accepted. But you only know that if you know what the other person was doing. So it might be that AI technology ensures, or could ensure, that that type of problem doesn't happen. I can't see it personally, but then I'm good old-fashioned AI, trained in the 1990s; I still believe in symbolic reasoning systems. For them, it would be really difficult. So I'm just wondering, given that type of scenario, how your analysis would factor that one in. Do I have the ability to interrogate a system about what it would have done for people who are like me, apart from one of the irrelevant or prohibited parameters?
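[Editor's note: the interrogation Burkhard asks for, "what would you have done for someone like me, but for one attribute?", can be sketched as a black-box counterfactual probe. The probe and the toy `biased_credit_model` below are illustrative assumptions, not anything presented at the panel; a real audit would also have to handle features correlated with the flipped attribute, which is the difficulty David goes on to raise.]

```python
def counterfactual_flip_rate(model, applicants, attribute, value_a, value_b):
    """Probe a black-box decision function: for each applicant, compare
    its decision with the attribute set to value_a vs. value_b, and
    report the fraction of applicants whose decision changes."""
    flips = 0
    for applicant in applicants:
        decision_a = model({**applicant, attribute: value_a})
        decision_b = model({**applicant, attribute: value_b})
        if decision_a != decision_b:
            flips += 1
    return flips / len(applicants)

def biased_credit_model(applicant):
    """Hypothetical biased approval rule, for illustration only:
    women face a higher income bar than men."""
    bar = 70_000 if applicant["sex"] == "F" else 50_000
    return "approved" if applicant["income"] > bar else "refused"
```

Usage: for `[{"income": 60_000}, {"income": 80_000}]`, flipping `"sex"` between `"M"` and `"F"` changes the decision only for the 60k applicant, so the flip rate is 0.5, which is exactly the kind of aggregate evidence the couple in the credit card case could only obtain by comparing their two outcomes by hand.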

David?

>> I mean, I think this goes partially to something that Suresh was talking about, which is: I don't know that I need to have the ability to do it, but somebody ought to have that ability, and that somebody ought not to be solely the company that designed and developed the algorithm. They have all kinds of conflicts of interest that will prevent them from assessing it correctly or sensibly. I do want to say, though, that there's a really important difference between assessing whether the outcome would have been different had some irrelevant attribute changed versus some prohibited attribute. If it's an irrelevant attribute that we changed and yet it made a difference, there's a bit of an "all right, something went wrong here in the design of the system." That's something that should be fixable. The problematic cases, of course, with fairness and bias are where there's a prohibited attribute, something that we ought not, whether legally or morally, pay attention to, and yet it carries predictive information about the long-run outcome, given the particular society, culture, and historical practices in which we exist. I'm not saying there's something inherently different, but you mentioned COMPAS, and that was part of the challenge of COMPAS: precisely that, in fact, there were these correlations. Now, they weren't as strong as claimed, and they made a lot of mistakes in the way they built the model, but it is in fact the case that in America, if you are Black versus white, all else held equal, you are more likely to be arrested later. That simply is an empirical truth about current practices in the US. So that's where I think the tensions really arise, and that's where I think we have to be very careful about how we assess the kinds of counterfactuals you're talking about, because they aren't as simple as "well, if I had been a woman." If I'd been a woman, many other things in my life would have been very different. And how do we assess these? The philosopher in me is very excited about these questions. The technologist in me is terrified of them. So I think those are the ones that are the most difficult to work out, and they probably should not be left in the hands of the company. Well, they definitely should not be left in the hands of companies alone.

>> Yeah, I was exactly thinking of what Suresh was saying, and you were also hinting at that: some of these decisions, the aggregate comparison, might be best left to a regulator, and we might still feel sufficiently confident to share our data with the regulator to make that sort of assessment, rather than with the company, or leaving it entirely to ourselves. Andreas, you were next in line.

>> Yeah, I wanted to pick up on something that I think is a more general problem behind what you described in that case, which I think was Apple, and it took them, I think, several months before they could address it. This more general point is that, not only within the field of AI but with many other current advanced technologies, it becomes more difficult for us to say whether they work or are broken, or to what degree. There are many wonderful examples of that. But classical technologies in many ways don't give us the same challenge. If I press the light switch, I get an immediate effect, whether it works or not. That is not the same in many contexts of AI use, but also beyond AI. And this leads us back to the question of trust and trustworthiness, because in many cases we are not very capable of saying whether something works, or to what degree it actually works, and therefore we need experts to do that.

I just quickly want to give you an example of the practical challenges of assessing trustworthiness. I was once involved in a project in a medical context, in radio-oncology, and the challenge is quite similar, in the sense that patients there don't perceive the effects of the procedure immediately. There's a strong beam, but you can't see it. You can't taste it. You hear a lot of noise, but you are not sure whether your cancer cells are actually being affected, or what is happening. It's very imperceptible, and still your life is at risk in this moment. So when we spoke with psycho-oncologists, one told me that some patients, because they don't know how to assess whether they are getting the best treatment, sit together in the waiting area and start to compare: you get 25 of these sessions, I get only 20; am I in a better position, or am I getting worse treatment? They start to wonder whether they can trust their medical professionals to give them the best treatment. The psycho-oncologist said that sometimes patients may look at the shoes of the doctor and think: they are dirty, how can I trust them?

And I think this is the point about the practical challenges where we need experts in order to think about how trustworthy AI models are. But then we have to ask whether we can trust the experts, and this is the kind of network we are in, and this is why some of you pointed out that we need strong institutions. But then we need trust, justified trust, in these institutions, and I think this is really the practical, indeed messy, reality of the networks of trust we have.

>> Yeah. Yeah.

sign it. It really reminds me um of of the building next to mine here in Edmir at the moment, the the formerly so-called David Yume Tower. Um where you have on the one hand the enlightenment

that says, "Oh, we don't believe any longer in authority and uh it's it's your own experiments that count and very very quickly you realize no, it can't quite work like that. We can't all start

a new we have to trust our teachers. We

have to trust the right form of expert."

So the question who is the right expert uh rather than expertise versus something else becomes the sort of dominating uh discussion. Uh Suresh you were on the list.

>> Yeah, thank you, and I'm very glad Andreas mentioned the light bulb, because I want to bring this back to a point Ann made earlier about innovation versus regulation. Andreas mentioned that you can turn on the light bulb and it just works; it's not like AI. Well, there's a reason that happens. I grew up in India with a lot of voltage fluctuation. You could not always assume that when you turned the light bulb on, it wouldn't just blow up because of a voltage fluctuation. We had to have stabilizers on our large appliances to make sure they had a steady electrical supply. The reason you can turn the light bulb on and expect it to work is because, A, there is an electrical grid which produces voltage within a particular specification, as specified by regulators; B, there are standards and practices for light bulbs to be built a certain way and tested a certain way; and C, the electrical systems in the house are under strict rules: electricians have to come in and bring your system up to code, so all your wiring is appropriate and sends the right signals, so that when you flip that switch, the light bulb just goes on. In other words, there's an entire apparatus of regulation, to Ann's point, that allows innovation to happen: that allows us to have fancy light bulbs that are motion-controlled or Wi-Fi-controlled and so on. It doesn't happen by accident. There's an invisible architecture of regulation and institutional trust that gives us this.

And Andreas mentioned medicine; I'm thinking of COVID, and how the science did not change during COVID. What happened was a breakdown in trust in institutions and institutional leaders, which led people to have strong divisions in their belief in vaccines, and what we're seeing in the US now with belief in vaccines, and in medicine broadly. The science has not changed; our trust in institutions has changed. And I think that's where we have to remember that even with AI, we keep talking about trust in AI, but what we need to have, and what we don't have, are the institutional structures that make us feel confident that when an entity uses an AI tool, it comes with the same institutional level of trust that we've had for other systems in the past. And I feel very strongly about that.

>> Which of course poses a problem, in that some of the people who push AI also push decentralized systems and say: we don't need institutions, we just need to do it by ourselves. And that then almost reinforces the problem that you identified. Ann?

>> yeah I want to keep on this same theme and kind of go back to Andreas's example around the doctors and the patients. Uh

my husband's a a cancer patient and was originally diagnosed with a pretty advanced form of cancer. And one of his first responses to that was, I'm going

to trust in science, right? I'm going to believe that the doc the medical profession can come up with solutions.

They're advancing all the time. if as my my work is to just stay alive long enough for those advancements to catch up with my illness and be able to treat it more effectively than it's treating

it right now. And that comes from a network of uh beliefs that the medical profession has adopted the sort of the

the development of bioeththics in the in the middle of the last century. a set of standards and practices that as technologies new new discoveries were

were being unearthed the medical profession was able to come together and say okay we need some new ways of thinking about this and to talk about malfeasants and non you know b

non-malfeence and benefits and those think those principles again I'm back to principles that underly the decisions that are going to be made in a certain uh industry and business has really

struggled you know medicine had the original the hypocratic oath, right?

Every doctor already was used to making a certain profession uh to you know a certain um obligation towards their

profession and business has struggled to bring forward a similar set of credentials or practices or oaths or

beliefs that serve the common good um towards towards the same those kinds of ends. So we we haven't seen that kind of

ends. So we we haven't seen that kind of um broad industry-wide adoption of a set of ethical practices that people can have some confidence in and I think we

we need that and that it and regulation helps that but so too does the coming together of industry and large players in industry and investors in industry

can make a difference by adopting some self-governance. I mean that's

self-governance. I mean that's effectively what bioeththics became was a pra a set of self-governance practices that allowed people to have confidence in medicine.

>> Yeah, yeah. And again, you're absolutely right about the importance of the institutional level here, and how we can strengthen our institutions to make AI work is, I think, extremely important. Andreas?

>> Yeah, I just wanted to clarify one point related to what Suresh just mentioned. There is, in this example, this whole set of institutions behind you when you turn on the light bulb. That's correct, and he is exactly right about that. I think what is different, and what I wanted to point to, is that we can't necessarily perceive the effect and judge whether it's correct or not in the case of AI, in many cases at least. In the case of light bulbs, I get an immediate experience of whether it works or not, and I can leave the background of institutions that make it work for me in the background. Regarding privacy, I don't necessarily perceive what data is used or taken from me, and in many cases I don't even know whether the main function of a model is working properly, or to what degree it does. I need experts for that. So this is not just about the background of institutions. This is also the foreground of institutions: assessing whether it works, not just making it work.

>> Which in a way reminds me of the old problem we have. If you speak to your medical doctor, your GP, they will tell you that you can reduce your chances of cancer or liver disease by 20% by stopping drinking. And I stop drinking and I do not get that illness. But I do sometimes wonder: did I not get it because I stopped drinking, or would I have been in the lucky bunch of people who can drink and still don't get liver problems? So I think we very often have exactly the same problem here, of predicting something that only happens with certain percentages. And that does require changes to the way we use AI systems to do essentially very similar things. Roger KBY?

>> Yeah, I wanted to pick up on something that Ann mentioned, if that's okay, when she was talking about the difficulties of having a singular, all-encompassing group of principles, and of trying to get companies to buy into them and run with them. It got me thinking about the fact that we live in an age of social media. AI is available at our fingertips. You only have to scream "fake news" and everybody just says, oh well, we don't believe it. There's a lack of trust, and that lack of trust is almost accepted by the public now as part and parcel of the day-to-day world in which we live. Does that make it harder for companies, given that there is that lack of bottom-up pressure? We kind of just accept it now as a population, and the pressure for trust and fairness and all of those good things that we talk about with responsible AI comes very much top-down.

>> Thanks a lot, Roger. That was a direct question for Ann, and she has raised her hand. Excellent. Good discipline.

>> Yeah, sorry. I mean, I accept that people have started to just accept certain things in society, like lower trust. I wrote a piece just recently about the way that politicians are using artificial intelligence, and how their own use of it to develop satirical representations of their opponents, or of people they disagree with, really erodes trust in society overall: trust in our elected officials and the people who should be representing us. But I don't think we should accept that. Just because society has grown more numb to some of these realities doesn't mean that we are off the hook. If we're going to accept leadership positions in society, in industry, in academia, in technology, then we still have an obligation to try to foster a sense of trust, because trust is a common good. It is part of what makes society work. Money is trust, right? It is the belief that if I give you this dollar, or whatever currency you're using, it will hold its value long enough for us to make the transaction we're in the process of making. And so if you start to lose confidence in some of these fundamental things that embody the trust we have in society, society can't function.

And I think that's what a lot of the public sentiment being captured in all the polling shows. There is a fairly consistent, widespread lack of confidence in the future that AI is offering us, a belief that it is not going to be a trustworthy AI-enabled future, because there is not the confidence in the technology right now that you would expect to see if it really were this fascinating, exciting new technology. Everybody would be appreciating it as such, and instead the levels of appreciation are sharply demarcated in our society. It is really only the people who are building these models, who see their potential for solving what are in some cases very narrow applications having the greatest early success, who have the confidence in AI that you would expect to see if it were being accepted broadly. So we can't just accept that erosion of trust. We need to do something about it.

>> Yeah.

Suresh, you are next.

>> I just wanted to violently agree with Anne. While it is correct that we have lost a degree of trust, we cannot accept that as the way it needs to be, and we also have to recognize that there are powerful actors who are exploiting that lack of trust for their own ends. I like to say that I'm a card-carrying computer scientist, but what I mean by that is that I feel it is part of my responsibility, as someone who allegedly has the skills and the knowledge to build these systems, to think about the ways we can build differently, about how we build back trust. What can we do with technical and institutional artifacts to help build back trust? What can our tools do for us differently from the tools that are being presented to us? I think the public, as Anne points out, is sensing that what is being offered to us as a bill of goods is a little questionable, and that people are looking for something different, looking for alternatives. It's the job, if not of all of us, then at least of people like me, to provide alternatives that are trustworthy, that can engender more trust, and we should be working hard on that.

>> Yeah. In a way, reflecting what both Anne and you were saying: I don't think the world will perish because there's a super-AI that decides it doesn't like humans and acts on it. The world will perish because some mid-ranking bureaucrat decides that a really, really crappy system is exactly what we need to run the national health system or the nuclear deterrent, and then it fails exactly in the way we would have predicted. So I think your emphasis on building the right sort of trust in the human institutions that then enable the meaningful deployment of AI is really important.

>> Yeah, just one other quick point about leadership. My area is leadership ethics, and I look at leadership and business ethics, and there are such vast implications for leadership in the future that AI is bringing forward. I mean, there's the reality that if we eliminate all these entry-level positions, we're not building the pipeline of future leadership that we're going to need for all kinds of decisions. So there's that on-the-ground impact of artificial intelligence that we're already seeing in the workplace choices being made by companies. But what I really want to emphasize is that the kind of leader we need to shepherd us into a future where AI is as pervasive as so many are predicting is going to require a different set of skills. One of those skills is going to be around developing trust and trustworthiness: giving people a sense that the choices institutions are making, whether they're companies or other organizations, are in the best interests of people. And that's why I say some of the principles we need to be advancing need to be more explicitly about serving humanity, more explicitly human-centered, than some of the principles that show up on each of the companies' different lists. And those principles are great: accountability, transparency, explainability, all those things. But we also need the things that speak directly to long-term institutions, the things that invest in humanity broadly, as leadership skills that we value in the leaders we're going to need for the future.

>> I'm wondering about one thing that just occurred to me. A long time ago, I had to look into the origin of this notion of the informed consent of the patient in medical law, and those of you who know this area, please shoot me down immediately if I say something wrong here. It was a long time ago, and I only really had to do it to double-check a footnote. But I immediately fell down a small rabbit hole. Some people were saying this is a very, very new principle, this patient-centric idea that you decide, that you have to be fully informed about your therapy and then make all the relevant decisions; it's relatively recent. Others were saying, "No, no, this is an old medical principle that goes back at least to the 19th century and the professionalization of the medical discussion." When I dug a little deeper, and that's more or less where I stopped, the answer was that both are right. This earlier concept of informed consent was something your medical doctor, as a professional, decided was good for you for that specific treatment. So they would decide: can you cope with this information? They would decide what amount of information to tell you. If they thought you were unreliable, they would just make things up in the hope that you would still follow the instructions. If they felt you needed to understand what was going on, they would tell you more. But

it was their professional decision rather than your right. And that changed in the 20th century, around the 1960s and 70s, to this model now where I often feel, when I go to my GP: look, you're the expert, just tell me what I'm supposed to do. Don't give me all these choices. I just want your opinion here; I don't want two different risk trees, which are extremely difficult. And I'm wondering where the right level is for AI here as well. I would still feel abused, and I wouldn't trust an AI system that tried to manipulate me as gently as the traditional medical professional of the 19th century did. But I also don't want to be left with all the responsibility, given that I'm not an expert in many of the systems I'm using in AI. So I don't know; it's really a very open question from me to you, and I've got two hands, one from David and one from Joe. I think David was first. Very, very quickly.

>> I think I was first, but I've talked several times and Joe hasn't had the opportunity, so I'm happy to let him go first.

>> Oh, okay. You know, for me, much is going to depend on what information and evidence I have that, in the case of the AI, it has my best interests at heart. I've had doctors where, actually, I didn't want them to just tell me what they wanted me to do, because I didn't think they understood what mattered to me. If I'm using a system built by a large tech company that has a history of being extractive with people's data and information, no, I'm probably not going to trust it. If it's a system built in Suresh's center at Brown, yeah, I think I probably would trust that they're going to have my interests at heart, and I'd be willing to defer some of my autonomy. Please don't abuse that, Suresh. So, you know, I think it comes back to this question of trust. Why do we trust physicians? Well, because we think that they have a fiduciary duty to us. They have a duty to put our values at the forefront. And we think that there are mechanisms in place, social, political, and legal, to correct things when they don't, right? There are malpractice lawsuits. There is the ability for doctors to be sanctioned. We have none of these things right now. Tech companies do not have a fiduciary duty to me. They have a fiduciary duty to their stockholders, at least here in the United States, or to their investors. There are no practices or systems in place to correct things when they harm me or do something against my interests. So, you know, I don't trust AI, but it's not because of the AI. It's because of the people and the systems around the AI. That is why I share your view, Burkhard. I trust my doctor; I'm like, you're the expert on this thing. And I would not have that reaction toward the vast majority of AI systems that are out there in the world.

>> Yeah. No, I think exactly. I trust my doctor considerably more than Elon Musk.

Joe, you were next on the list.

>> Yeah, I really like the comment David just made, especially finishing on "I trust the people and systems around AI." It reminds me a lot of the literature from the legal scholar Helen Nissenbaum, where she talks about contextual integrity. For a system to be trusted, you need to adhere to what she calls contextual integrity, which means the context is essential. That is the point I made at the beginning: which sector, which context, are we talking about? One example, which you also mentioned, Burkhard, is the medical field, especially in countries with public health systems, where you kind of expect that they will treat you well, and where there is a whole backdrop of checks and balances and expertise you can trust at that moment. That's the point Nissenbaum makes: informed consent is not really the important thing anymore. It is still asked for, but people know the backside of this industry and sector, and know that, in general, "I can trust what they advise me to do, and they have no hidden agendas; they look out for my well-being and not for the money they want to earn from it." So this kind of contextual integrity is quite essential, and it also connects to the point about consent:

in order to have trust, you have to consent, and it has to be informed consent. You can see how weak this is in other sectors, like digital marketing, for example on social media, where you don't know the back side of the story, and where you know that social media are not social: they're meant to connect you to personalized advertising, because their business model is advertising. So this creates a feeling of "okay, I should not consent, but I cannot avoid it," and consent takes on a different meaning, because there is a lack of contextual integrity in that sector. So the whole question is what we are talking about with this technology, and especially what kind of context, what kind of sector, the technology is embedded in, and to what extent that sector or context has already built trust and been accountable when things go wrong. Can you call somebody and say, "this had a bad effect on me, what's happening?" You can't call Meta because your young son was badly affected by using Facebook or Instagram; there's nobody to call, so they don't feel accountable. This relates a lot to why, or to what extent, we can trust certain kinds of systems.

>> David?

>> Well, I just wanted to come in and quickly add: I think one of the

interesting things that's starting to happen, at least in multiple countries, including parts of the United States (I haven't kept quite on top of the EU AI Act implementation), is a set of policy shifts that try, exactly as you say, Joe, to address that. There are laws that have now been passed that essentially say: if companies do not do the following things with their AI, we will presume that they are liable for the harms that result. So, say my kid has an eating disorder because of their time on Instagram. Well, if Instagram is not taking certain kinds of precautions, there are now political jurisdictions where it is held presumptively liable. It doesn't have a defense against it; it actually has the burden of proof to show that it didn't cause this. And I think that's potentially a big shift, because it starts to create in the companies a very strong incentive to actually think about the impacts of what they're doing: not just the impacts on their bottom line and their profit for their shareholders, but the impacts on the people using the system.

>> Oh, before the next one, just to connect to that: that's what I've written about as cooperative responsibility. What you often see now is that it's put upon the individual: you just have to use Instagram less, that's your problem. But it's a shared responsibility. There are other stakeholders, at the very least the company itself, that have responsibility; the regulators have responsibility too; different stakeholders are involved. All too often that's lost from sight, and it's the individual who just needs to do some digital detoxing, and then everything will be solved. So that's a different story.

>> All very excellent points. I have Funka and then Anne. Funka?

>> Hi there Burkhard. How are you?

>> Hello. I'm fine thanks.

>> It's just been fascinating hearing all these thoughts about what trust looks like, and you really can't get away from elements of relational trust, even though we're talking about AI. One of the things I was thinking about was the trust equation in this context, and how that has always really helped me. For context, I'm a general counsel working within the tech sector, so I'm very immersed in all of this at the moment here in the UK. The trust equation essentially describes trust in terms of a number of elements: trustworthiness is about credibility, reliability, and intimacy, and self-orientation comes into it. Credibility is the confidence in the expertise and the knowledge. Reliability is the consistency in actions and promises. Intimacy is the emotional safety and connection. And self-orientation is about the self-interest aspect: am I thinking about your interests over mine, or where does the balance of interest lie? I think it's so important, when we're talking about AI and trust, that we don't lose sight of the emotional safety aspect. That's been touched on already in some of this conversation, but we can't get away from it. Whether we're talking about the use of AI in social media or elsewhere (I work in digital health, for example), emotional safety is a key part of what we're doing, delivering for our patients and improving pathways. So I just wanted to mention that, really, to bring a slightly different perspective to what trust and trustworthiness mean in this context.
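[Editor's note: the trust equation described here is usually attributed to Maister, Green, and Galford's The Trusted Advisor. As a sketch of the relationship the speaker outlines, it is commonly written as:]

```latex
% The trust equation: the three numerator elements raise perceived
% trustworthiness, while self-orientation (focus on one's own interest)
% lowers it.
T = \frac{C + R + I}{S}
% T: trustworthiness, C: credibility, R: reliability,
% I: intimacy (emotional safety), S: self-orientation
```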

>> Yeah, thanks a lot. And I think that's also a really important reminder. We started, in a way, with David's first talk, with trustworthiness as a property of a system that tries to achieve something good, and we then increasingly discussed what we do if something did not go well, and how we can restore trust. I think both aspects are of equal importance, and the trust equation is an attempt to strengthen that first one again.

>> So, originally I had my hand raised because I was going to go back to something that David said, but I first want to respond to Funka's point. I think emotional safety is an important element, and one of the places where we're going to have to come to terms with it is the use of artificial intelligence in education, in schools, particularly in K through 12. We

saw a lot of new technologies put into the hands of young people when iPads came online and every student got a computer, but without the guardrails, without the limitations on accessing information that wasn't appropriate: a third grader coming across pornography, for example, while trying to research something for their third-grade class. So we're going to have to be thoughtful about the choices that teachers make about how to use artificial intelligence with young people, because I think their emotional safety and their emotional health are such important attributes of their ability, their capacity, to learn. So, just a point on that. But back to David's point about the EU AI Act and the pendulum swinging back: I'm glad to see that, and I think the EU is providing not just some regulatory leadership but the regulatory leadership that this space needs. But the pendulum is swinging back from a very specific regulatory choice in the United States that governed companies like Meta, and that prevented Meta from having to take responsibility for how a young person was affected by their time on its social networking platforms: Section 230 of the Communications Decency Act, which said that these companies were not responsible for the content appearing on their sites. Well, part of my background is working in a media company, a newspaper company in the United States, back when people still read newspapers. And newspapers

could not publish things that they were not held responsible for. There were laws that held them to certain standards, and there was a code of ethics that journalists followed around their reporting responsibilities, which they took very seriously; they did take responsibility for the things published in their paper. So for these companies to have been given permission to develop their products without that responsibility is an example of a liability framework that allowed this to happen. So I'm very glad to see the EU coming back in with guidelines that are going to hold some of these companies that cross international borders to a different set of rules.

>> How would that work with a printed newspaper, if they print letters from readers? I honestly don't know, but that is often the closest analogy you would have, wouldn't it, to a post on a blog or something like that.

>> Yeah. I mean, in the newspaper world there was a very clear distinction between the news part of the paper and the opinion part, and different standards were applied to those different parts. The letters to the editor appear in the opinion pages and are clearly labeled as such, and therefore are not required to be based in fact or to meet certain standards. It's partly why, right now, if somebody writes a letter to the editor that is inflammatory, with a lot of claims about somebody, a responsible news organization would not actually publish it in the letters to the editor, but would research it and report it as a news story, and cover it in its news pages, to make sure it was taking its responsibility as a news publication seriously. So those choices are actually made: there are times when letters to the editor are rejected and then pursued as news stories because they carry such inflammatory claims.

>> Which gives them a much greater scope of choices than your typical AI moderation software would have.

Excellent. Joe, you were amazingly quick.

>> Just to second the point just made: it's even worse. Just recently it was discovered that Meta as a company even held back evidence of the harm its product had on children. They found it out, they had asked for the research to be done, and then they deliberately held back the information, which shows how little responsibility they take in this regard. So, just to make that point. And in that sense, I wanted to add another point, but I've forgotten what I wanted to say. I'll come back to it in a minute.

>> Yeah, again, for me that is one of these interesting questions. On the one hand, it is a good thing that Meta found out about the problem, because it means they at least look for that sort of thing. The negative side is that they then didn't act on it. From a legal perspective, we want to encourage companies to do the right thing, or force them to do the right thing, without creating any advantage for willful ignorance or willful stupidity, and that can be very tricky from the legislator's perspective. Joe, you remembered?

>> I remembered my point. In that sense, if you want to build trust, you have to take on some responsibility. But of

course, as Anne sketched, in the newspaper sector, in the classical media sector, the difference is that they have full control of the content they produce. So they can also take full responsibility for what happens on their platforms and in their newspapers. It's a different story, of course, with social media platforms, which for the most part do not produce their own content but host other people's content. So theirs is a different kind of responsibility for generating trust. The responsibility they have there is not so much about looking into the content itself, but about their algorithms not spreading things that shouldn't be spread, that shouldn't get a megaphone. That is their responsibility. So, in a certain sense, the dividing line between where their responsibility ends and where it starts again lies more within the systems: what they do with what happens on their systems, and for what reasons. Because spreading disinformation, for example, is very beneficial from a financial point of view: it generates more clicks and more data, and therefore more personalized advertising. So for them, the reason for doing something in this regard does not depend on having trustworthy information, but on making money for their shareholders next quarter, and that means agitating people, making people mad, making things people want to click on. So that's a different way of looking at what their responsibility is in this particular sector compared to, for example, the media sector.

>> Yeah. Suresh?

>> I mean, one thing that's worth reflecting on, thinking back to Anne's point about the news and opinion sections of a paper, and also to what Joe's been saying about these different mechanics of trust: you know, I don't know the history of journalism, in the sense that I don't know how the news and opinion sections of the paper came to be, but I suspect it wasn't always that way. I know there was a time when there was very little distinction between what was claimed as news and what was opinion, and there was a shift in the newspaper industry, I think in the 1920s in the US, and beyond. My point is that one of the things we know for sure is that the introduction of AI is disrupting a lot of sectors, a lot of processes, and a lot of governance systems that we're used to having behave a certain way. A lot of what we're seeing right now may, with the luxury of hindsight fifty years from now, be viewed as a necessary path to a new equilibrium in how we work with these systems. And I think we need to recognize that we are in a stage of necessary experimentation, that we don't yet have a way of shoehorning these systems into current infrastructure. Maybe that can work, but maybe we also need to think about new ways of doing governance with these tools, and finding the line between new and existing requires experimentation, requires trying things out, and requires a sense of, I suppose, adventure as we try new ideas. We can't expect that things will either work the way they are, or that we have to give up hope and nothing will work. We just have to try things out, and we need to keep that in mind as well.

>> Which, in a way, unless there are

strong views from anyone in the audience or on the panel, I think would be an excellent point on which to wrap things up, because in a way it shows exactly where the problem is. On the one hand, we know that these systems pose significant risks; we know that bad things do happen. But we also have a reasonable belief that things could be different, that there are better institutions and better mechanisms, and that these are not fantasy. We are not talking about Asimov's three laws of robotics. We are talking about very concrete steps that we can all take to ensure that these systems really work in a way that supports human well-being, rather than just a small number of these companies' shareholders.

Is there consensus that this is our final position?

Because that would allow me to say a very big thank-you to all of our panelists, and to everyone who joined us today in the audience. I thought we had a really good discussion going on here, with, for me at least, lots of new ideas that I now need to follow up in an already quite overwhelmed timetable. So I had better get to my AI and tell it to write two or three papers for me; I'm totally sure that will be perfectly okay and nothing bad will happen from it. And before that starts a totally new discussion: thanks a lot to all of our speakers, thanks a lot to AI4People, thanks a lot to you in the audience, and I hope to see you all in different parts of your life.

>> Thank you very much. Thank you very much.

>> Thank you. Thanks very much.

>> Bye.

>> Bye.
