
The Theory of Artificial Immutability by Sandra Wachter (University of Oxford)

By EUI TV

Summary

Key takeaways

  • **AI Creates Novel Algorithmic Groups**: AI infers groups like 'dog owners', 'sad teens', 'video gamers', or 'fast scrollers' that humans never used, and these are not covered by non-discrimination law. [06:40], [07:23]
  • **Real-World AI Discrimination Examples**: Loan applicants rejected for fast scrolling or lowercase letters; job applicants less successful using Internet Explorer or Safari instead of Chrome or Firefox. [09:09], [09:31]
  • **Traditional Criteria Fail Algorithmic Groups**: Immutability, relevance, historical oppression, and social saliency do not apply to algorithmic groups like browser users or retina movers, leaving them unprotected. [12:44], [19:01]
  • **Theory of Artificial Immutability**: Algorithmic groups act as de facto immutable characteristics due to opacity, vagueness, instability, invisibility, and lack of social concept, denying control like traditional traits. [24:01], [26:29]
  • **Transparency Fallacy in AI**: Transparency alone does not grant control over decisions based on eye tracking or heart rate; power and autonomy over criteria are needed beyond black-box explanations. [31:02], [32:27]
  • **Diagnostic Tool for Justification**: Artificial immutability requires justification, like child labor laws; assess transparency, stability, empirical coherence, and ethical acceptability for fair competition. [32:37], [33:01]

Topics Covered

  • Algorithms Entrench Past Biases
  • AI Creates Invisible Groups
  • Traditional Criteria Fail AI Groups
  • Artificial Immutability Blocks Agency
  • Transparency Fallacy Ignores Power

Full Transcript

good afternoon everyone my name is Nicola Patia I'm the co-director of the cluster on technological change in society we are here today

hosting Professor Sandra Wachter who is a professor of technology and regulation at the Oxford um at the Oxford sorry about this my window is closing at the Oxford Internet

Institute at the University of Oxford Professor Wachter is very well known for research on the legal and ethical implications of AI big data and robotics as well as on internet and platform

regulation now in going through the biography of Sandra I realized that her interest was not only in digital technology but also in health tech and med tech and this is also an area in

which our cluster has been trying to push the boundary so we'll be interested in hearing a little more about this I could go uh really uh along the biography that's all available online

but in the interest of having a productive discussion today I will not take the time to to read the biography of Sandra and go go through our various

experiences in in Academia and uh in the policy world now I just want to say that Sandra has accepted to join today to present the paper entitled The

Theory of Artificial Immutability protecting algorithmic groups under anti-discrimination law the paper is available on SSRN but it is of

note that it will be published in the Tulane Law Review now um we have decided that Sandra would have half an hour to present the paper and go through the arguments in the

paper and after that we'll have a q a with everyone here if you have the possibility to turn your camera on uh please do it uh if you don't mind it's

uh it's better to enable a fruitful interaction if you can't or you don't want to do this it's perfectly fine of course it's just uh at your discretion all right Sandra

um the floor is yours we are very happy to have you with us um and I look forward to the next minutes thank you so much for for the

introduction I am going to share my screen with you now and hopefully you should be able to see the slides now yes I see a thumbs up perfect

um Yes again thank you so much for for the introduction and for inviting me to to give a talk today um as Nicola just mentioned it's on my

latest piece of research which is called the theory of artificial immutability protecting algorithmic groups under anti-discrimination law the paper

is um available on SSRN and if you wanted to have a closer look it's 50 pages so I'm not able to go through everything but what I want to talk about is

some of the highlights and then I look forward to an interesting discussion after that um yeah let's just get started um yeah the first thing that I quickly

wanted to talk about is that the paper is obviously situated in the question of algorithmic fairness and fairness in AI and again probably in this room I don't

have to talk too much about this it will be very clear why we all need to care about this topic in my opinion at least and the main reason is has to do with

how the technology works right how do machine learning algorithms work how does AI work well AI works um by going through a large amount of

historical data that's the only thing that is available and they do this in order to make important decisions ranging from hiring education criminal justice giving out loans giving insurance

any sector you can think of algorithms are making decisions about people by looking at the past and trying to predict the future so you look at past decisions how they have been made and

you try to predict um the future based on that and that wouldn't be a problem in itself if you were always happy with how we made decisions in the past but we have to

take into consideration you know who gets promoted who gets hired who gets fired who does get the loans in our society who has to go to prison who is being denied education so very often you

will see that there is some inherent bias in how we make decisions this is grounded in the data that we collect fed into the algorithm and transported from

the past into the future and of course this is not just an academic problem this is a problem that we have already in real life just like three quick

examples very often people use American examples I have um European examples here to show that we're not doing much better than um the Americans unfortunately in Austria one of the

examples in 2019 the Employment Agency used an algorithm to decide um employment benefits and the algorithm

was shown to be sexist ableist and ageist again picking up historical patterns from the past transporting this into the future
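
The dynamic described here, a model fitted purely on past decisions carrying their bias forward into future ones, can be sketched with a toy example (all data, group labels, and thresholds below are invented for illustration, not taken from the talk):

```python
# Toy illustration (invented data): a model fitted only on past decisions
# inherits whatever bias those decisions contained.
from collections import defaultdict

# Past benefit decisions as (applicant_group, approved) pairs.
# Historical decision-makers approved group "B" far less often.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def fit_approval_rates(records):
    """'Learn' the historical approval rate for each group."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in records:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

def predict(rates, group, threshold=0.5):
    """Predict a future decision purely from the past pattern."""
    return rates[group] >= threshold

rates = fit_approval_rates(history)
print(predict(rates, "A"))  # True: group A keeps being approved
print(predict(rates, "B"))  # False: the historical bias carries forward
```

Nothing in the fitting step questions whether the historical decisions were fair; the pattern is simply transported from the past into the future, which is the failure mode behind the Austrian employment-benefits example.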

here in my adopted country in the UK we had a scandal in 2020 where we used an algorithm to predict the outcomes of the A-levels so the finals the final exams

that students have to take when they want to enter into university we did this because Covid was very high at that point and the students couldn't take the exams in person and therefore

they used an algorithm to predict how well they would have done if they were actually sitting in class during the exam and the problem here was that the algorithm started to discriminate

against um students from lower socioeconomic background and especially um students of color the last thing that I have here is um you know automated

content moderation uh that we have everywhere on every platform that we use where it's clear that those algorithms are discriminating against

the LGBTQ community so again this is not a surprise in that sense we have been aware for many many years and decades now that algorithms reflect injustices in

our society and what they do is entrench existing inequalities further so those who are already suffering in our society are those that even more have to pay

going forward when we deploy those algorithms so traditional protected groups but what if I told you that algorithms might group you in a very different way

a different way that we usually don't group people um for example what if I told you an AI is inferring whether or not you're a pet owner whether you have a dog and uses

this information to give you goods and services online what if an algorithm groups you or perceives you as a sad teenager and is tailoring content

according to that group membership what if an algorithm infers that you have a gambling problem and is offering certain types of services based on that what if

an algorithm infers that you are a single parent and whether you're poor or whether you're a video gamer right and assumes that you're part of that community and is tailoring decision

making onto you well you could say well you know what's the problem what's the big problem of being perceived as a video gamer what could be what could possibly go wrong and in this paper what I'm trying to say is that I think there

could be a lot of things that actually could go wrong so for example look at China where the social credit scoring system is being used to decide whether

somebody is a good quote unquote good citizen so algorithms are being used to decide whether you are acting in accordance with the values proposed in

in our society or whether you're not if you are seen as a good citizen it could mean very good things it could mean that you get better offers in supermarkets if you're seen as a bad citizen it

could mean that you might not be allowed to go to certain universities anymore so in China being a video gamer is something that makes your social credit

score drop so you would actually be penalized for um being a video gamer but this is not just in in China where this is happening this is also Happening

Here in the West for example where we use face recognition software for many purposes now ranging from job decisions to whether a loan should

be given out so an algorithm is detecting how your retina moves how your face moves how much sweat you have on your forehead and it's making decisions based on that it might also be the case

that you have to pay more if you are an Apple user rather than when you're a PC user it might be if you're applying for

insurance in the Netherlands you have to pay more if you happen to have an address that has a number and a letter in it rather than just a number

if you're applying for a job I would recommend that you use a browser such as Chrome um or Firefox and not Internet Explorer or Safari because if you use those

you're more likely to get rejected if you apply online and finally here if you're applying for an online loan the speed at which you

scroll through the online application and the fact whether you use capital letters or not has an impact on whether you get the loan so let's go back to non-discrimination law obviously those

types of groups don't find any protections under the law video gamers fast scrollers people who move their retina in a certain way those are not people that are traditionally

protected Firefox users never had to seek protection under the law in the past and obviously you don't find any protection here and so traditionally we will look at things like race and

ethnicity age sexual orientation when we wanted to grant um when we wanted to grant protection but not those new atypical groups and so

this is exactly what the paper tries to do it is trying to ask the question what makes a group worthy of protection because I felt right the way you move your mouse and whether you use a

certain type of browser whether you have a dog is not necessarily something that we should use for automated decision making or for decision making in general

um and maybe the law would open up their arms for protection for those groups if they are similarly situated like the groups that we're already protecting and in order to maybe discuss whether we

should include those new groups I have to ask the question what makes a group worthy of protection how like why are certain groups protected but not others

and so when I started that project I thought this is going to be a short quest and it turned out to be an

incredibly long very exciting but also um incredibly rich and long-lasting uh quest to go through uh you know legal

Theory philosophy political Theory because I thought you know I have a gut feeling when somebody should be protected but it's very hard to move from a gut feeling to actually coming up

with criteria of what makes a group worthy of protection so I thought this was going to be a short quest but it took me many many months to actually uh think about that and it will come as no

surprise that you can fill a library with all those thoughts um there is no coherent framework of what makes a group worthy of protection people have just been discussing this

for uh decades centuries Millennia if you will without coming up with a clear framework what I did in turn is I offered a taxonomy

um to best to best summarize the literature as I found it I came up with four typical criteria that would usually come up in that discourse when thinkers

were discussing whether somebody should be protected in the law or not and you can disagree with that taxonomy others might have different views on that but that's my my contribution to think about

what makes a group worthy of protection and so I'm gonna go through them um and long story short I will show you that none of those criteria actually map

on to algorithmic groups which makes them not worthy of protection under the current law but I'm gonna defend that claim in just a second so what makes a group worthy of protection and the first

thing has to do with immutability and choice what do I mean by that so the law usually only wants people to be judged based on things that they have control

over so it would be illegal to make decisions uh based on immutable characteristics for example if we say ethnicity or age are traditional

immutable characteristics and it would be illegal to use them to make decisions because you didn't have a hand in acquiring those attributes nor could you

actually change them so immutability is usually a sign that something should be protected under the law and when I say law I mean non-discrimination law the other side has to do with choice there are

certain choices in our society that the law wants to protect for example freedom of religion is a choice and it would be unacceptable in many cases if you

were to say oh I'm not going to give you uh the loan based on your religion so that idea makes a lot of sense in a human

setting it kind of breaks down when you think about those algorithmic groups that I just mentioned right using a certain browser is not an immutable

characteristic but it might also not be a fundamental choice same with video gamers or fast scrollers that's not an immutable characteristic but you can't really say probably that it's a

fundamental choice in the same way that religion would be so it might be that the groups fall in the middle of both and therefore find no protection under the law the second criterion that very often

comes up has to do with relevance arbitrariness and merits so the law usually wants to prevent people from using characteristics that have nothing

to do with the task at hand so for example you should not use race or sex or gender when making decisions because usually it doesn't have anything to do

with the task at hand right people of color or women are not better or worse at something just because they are part of that group and that makes a lot of sense in a setting

um where humans are the ones that make decisions with algorithms that logic breaks down because AI makes everything relevant in fact that's the whole

purpose of AI is to find correlations patterns links between data points where humans would never look so it might actually be that you know an algorithm

finds a correlation or a connection or some type of relevance between liking the color green and repaying a loan or between having a dog and being a good worker right so

if we allowed the relevance criterion to sanction the use of algorithmic groups we would actually disable non-discrimination law completely because everything would be fair game at

that point the third thing has to do with historical um oppression stigma and disadvantage it will come as no surprise that things

like sexual orientation and age um are criteria that have often been used and are still used to oppress and stereotype uh people in our society

so very often those groups can look at and point towards a historical ongoing abiding structural disadvantage that happens with them throughout various

aspects of their life could be in loans in education um so it's just something that travels with them in a certain way

again the problem arises here with AI because obviously a video gamer or a fast scroller an Internet Explorer user somebody that moves their retina too

fast or too slow cannot point towards that historical oppression that they've experienced because those are not traditionally excluded groups in our society very often people don't even

know that they're part of that group right or Society doesn't know that they're part of that group and I think even if they knew we could have a

discussion of whether or not having a dog you know carries the same type of stigma that would be attached to somebody that for example is a person of color a person with a disability so the

stigma might be different and you could say well you know what if new oppression emerges yes it could be a way to open up that scope but can you even Point

towards new emerging patterns probably not because those algorithmic groups are constantly changing you're part of different groups you're a part of multiple groups you move from one group

to the other at some point that group will dissolve right so today you didn't get the job because you do have a dog but tomorrow it might be because of the color of your car and the day after that

because you have short hair so you can't actually Point towards a new emerging pattern because those groups are constantly changing so historical stigma

and oppression doesn't really fit for those groups either um the last one that I found has to do with social saliency so the literature suggests that only socially salient

groups should be protected and socially salient means that being part of that group is either very important for the individual and or for society

often the group is being marked by something like solidarity shared culture um some sense of identity so a random group of people is not

something that is socially Salient under the law and therefore doesn't deserve protection but that's exactly where algorithmic groups come in again because very often those groups are not socially

salient think again about the browser user and how you move your mouse how fast you move your retina this is not something we even have words for sometimes those groups completely defy

any human understanding because it's just a couple of electronic signals that don't make sense to humans whatsoever so I question whether you can even talk

about some type of social saliency or talk about having a shared identity or community um and so you know the law wouldn't protect those groups because they are

not traditionally socially Salient so according to all those four criteria algorithmic groups do not find any protection under the law and I thought okay that's

um unfortunate because I still think that there is something iffy about the whole situation and so I thought it might make sense to take a step back to add a further level of

abstraction and ask the question why discrimination is wrong right why is it that we need to be mindful of other groups in our society why am I asking

this question well if the usage or the mistreatment of certain groups algorithmic groups is similarly immoral like the usage of groups such as ethnicity or gender then you could make

the argument that those groups should be protected as well long story short in my opinion the usage of algorithmic groups does not invoke the same moral wrong

that it would with ethnicity or gender or ability why is that the case again traditionally when we think about the question you know what makes

discrimination wrong it has to do with moral superiority it's about demeaning people it's about

considering them of lower moral worth stigmatizing them mistreating them there's a power difference between those groups right and again this

doesn't really fit with all of those new groups when we think about algorithmic groups because the coder doesn't necessarily experience ideas of superiority when they look at the data set they might not even know

who is in that data set for all we know they don't even care who's in the data set the only thing that they care about is to optimize a process right to make certain processes smoother make them

more time efficient cost efficient but it's not really to purposefully hold a different group down um and assume that they are of lesser moral worth if that makes sense and

very often the power difference might also not be the case because again the groups are constantly changing you might not be part of that group for a long time it might be that you suffer in one

particular decision um you get slightly higher prices for a product but the next day the whole thing is rectified again so it's quite questionable if that same power

difference does actually exist that it usually would with algorithmic groups and so again um according to what makes discrimination wrong I came to the

conclusion that algorithmic groups do not invoke the same moral wrong I was like okay um I'm gonna have a last attempt in figuring out if there's another way to open up that scope

um to welcome those groups in as well and that had to do with the aim of non-discrimination law basically saying you know if the law got

its way if the law did if we all did what the law wanted what would Society look like and if algorithmic groups undermine that aim in the same way that

using sex or gender would then there's an argument to actually include those groups in the future and again the literature suggests that the aim of non-discrimination law is substantive

equality or de facto equality it's about equal opportunity eradicating the lingering effects of past discrimination and oppression and

domination and trying to combat subordination and stigmatization and again this makes a lot of sense when when we think about um human human interaction and how we

traditionally treated how people traditionally treat each other when it comes to discrimination with AI this again is a bit of a problem because

this gap doesn't necessarily exist right there is not always one group that is treated better than the other um the groups are constantly changing

um the groups are incomprehensible the groups are very diverse and unintuitive so there is not the same need of trying to fill that power gap that vacuum that exists

between those groups because it's actually more um diverse rather than one group always winning out over the other and so

um those groups are not standing in the way because they are not necessarily creating the same gap that the usage of traditional groups would

okay um okay I still think that this is potentially still problematic and so I started thinking about um how non-discrimination law was

just written for humans um because we wanted to keep humans in check but algorithms are doing similar things um holding us back in similar ways so

maybe we just have to think about harm differently and this is where my new theory of um artificial immutability comes in this is my my theory that I developed in that

paper and it's based on the idea that I thought okay if you just break it down to the very very um basic things of what the law wants you

to do or what the law wishes for you then it would mean that the law wishes for you to be free and have certain rights have access to goods and services to be able to achieve your life goals

the law wants you to receive an education pursue a profession have access to health care shelter housing um

Financial Services all of that right and so this is really what the law wants you to do

and in my opinion AI hinders that but just in a different way than the law anticipated the law anticipated that somebody that holds power

um will have prejudices against a certain group and will use that power to hold certain people back from

educational opportunities from getting loans from getting access to education right so algorithms don't have that same type of prejudice when it comes to those

algorithmic groups humans don't have it in that regard but the harm is still the same so the harm is the same it's just that the perpetrator

and the process of bringing about that harm is different the harm is being brought about not by thinking that women are of inferior worth and therefore they should be held down the harm is brought

about by using random groups ephemeral groups nonsensical groups right that act as de facto immutable

characteristics so those groups act as de facto immutable characteristics in the same way that for example sex would they're just differently created but in the same way that an immutable

characteristic can hold you back traditionally those algorithmic groups hold you back because of their immutability and so I started to think a little bit more about what I

mean by artificially created immutability and it's quite similar to you know traditional immutable characteristics where you don't have any control over them the difference is just that their source is different so they are

not from natural sources like the birth lottery or things like that but they're created artificially by algorithms in the paper I go through a couple of those

different types of um artificial immutability aspects and they have to do with opacity vagueness instability invisibility and the lack of

social concept so the idea is if I don't know what criteria are being used if it's completely opaque I have no control over them I cannot actually

um prepare a good loan application I have no control over that part of the process so it is de facto immutable to me similarly with vagueness

right I could tell you for example that your Facebook friends will have an impact on whether you get the loan or not but the vagueness of that explanation will not help you decide

who's actually a good friend on Facebook or a bad friend on Facebook so again you don't really have control or agency over that whole process same with instability as I said on multiple

occasions those groups are constantly changing and moving from one group to the other and if there is no stability how can you actually prepare a good job application how can you prepare for university in a

successful way if the criteria are constantly changing invisibility I talked a little bit about face recognition software how you move your retina how fast your heart beats how much sweat

you have on your forehead all of those things are being measured but you don't actually have any control over them either they're immutable to you and then a lack of social concept as I said

sometimes there is not even a human word um to describe what's being measured it's just electronic signals that are being picked up and in the same way if I don't even have words for what's

happening how can I even think that uh there is some kind of control or agency around it and those types of immutable characteristics in my

opinion disrupt good decision criteria and so in my paper I talk a little bit about what I think good decision criteria are and

good decision criteria also will lead back to the roots and the origin of non-discrimination law the whole push towards clear and transparent rules on

how to give out loans and how to get into university and who's going to be promoted was a push from the civil rights movement in an attempt to rein

in nepotism sexism and racism so clear concrete criteria for how you can compete with others are something we can thank those civil rights

movements for and some of them have to do with transparency and again stability uh empirical coherence you know having an understanding of why it's

okay to use grades in order to admit somebody to university because there is some kind of connection between those two things an intuitive and an empirical link and normative

and ethical acceptability but in my opinion uh AI is completely destroying those good decision criteria that we have you know transparency and stability

that's the opposite of what AI is supposed to do clear and concrete decision criteria are not what you use AI for right AIs are very often

used to come up with their own rules um you know machine learning algorithms are supposed to come up with their own rules and very often we don't understand those processes and we don't understand

what kind of criteria are being used and it's seen as one of the perks because then we don't have to give a template to an algorithm anymore we can let the algorithm learn its own rules if that makes sense
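
The point that a learning system derives its own rules from whatever features it is handed can be sketched with a toy rule learner (the records, feature names, and outcomes below are invented for illustration):

```python
# Toy sketch (invented data): a rule learner given arbitrary features will
# happily pick whichever one best separates past outcomes, here an
# incidental feature like the applicant's browser.
records = [
    {"browser": "chrome",   "has_dog": True,  "hired": True},
    {"browser": "chrome",   "has_dog": False, "hired": True},
    {"browser": "firefox",  "has_dog": True,  "hired": True},
    {"browser": "safari",   "has_dog": True,  "hired": False},
    {"browser": "safari",   "has_dog": False, "hired": False},
    {"browser": "explorer", "has_dog": False, "hired": False},
]

def best_splitting_feature(data, features, label="hired"):
    """Pick the feature whose values most purely predict the label."""
    def purity(feature):
        by_value = {}
        for r in data:
            by_value.setdefault(r[feature], []).append(r[label])
        # Fraction of records matching their value's majority label.
        correct = 0
        for labels in by_value.values():
            majority = max(set(labels), key=labels.count)
            correct += sum(l == majority for l in labels)
        return correct / len(data)
    return max(features, key=purity)

print(best_splitting_feature(records, ["browser", "has_dog"]))  # browser
```

No template was given to the learner; it found "browser" as its own rule because, in this invented sample, browser choice happens to separate the outcomes perfectly.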

same with empirical coherence there's a link between good grades and entry into law school but there is not necessarily the same empirical link between liking the color green and

repaying a loan and unfortunately that causal link that we usually would want to have is something that computer science is not really interested in and so you know

Jonathan Citron called it the death of theory where we don't actually you know deploy um you know other disciplines such as the social sciences to figure out how

those two data points are actually connected what's the causal link between that correlation and that in my opinion makes it normatively and ethically challenging and potentially unacceptable
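
The gap between correlation and causation described here can be illustrated with a small Pearson-correlation computation (both variables are invented, and by construction no causal link connects them):

```python
# Toy sketch (invented data): two causally unrelated binary variables can
# still correlate in a sample, and a purely data-driven system would treat
# that correlation as "relevance".
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

likes_green = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]   # 1 = likes the color green
repaid_loan = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]   # 1 = repaid their loan

print(round(pearson(likes_green, repaid_loan), 2))  # 0.6
```

A correlation of 0.6 in a sample like this says nothing about why the two variables co-occur, which is exactly the theory-free gap the talk criticizes.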

and so very often when we talk about those criteria and how they're being disrupted um people ask what should you do and very often people will say well we just

need more transparency let's shed some light of what's actually inside the black box and that will help people to you know understand better how decisions

are being made about them and that's what I call the transparency fallacy is transparency good yes of course and we need more of that but just

more transparency will only partially remedy um the problem you know me telling you that I'm using eye tracking software is

not gonna magically give you the ability to move your retina at a different speed right so yes transparency is important but it's not just about

transparency it's about power so when we think about the criteria that are being used we need to think further than transparency we need to

think do people actually have control and autonomy over that decision-making process do they have the ability to impact decision criteria you know in the

same way you would be able to have better grades in order to get to law school do you have the same way of controlling how your face is moving or your heart rate so I think there needs

to be much more discussion around whether you have control um and autonomy and power over the process or not rather than just transparency and that leads me to the

final point which is to ask you know is all artificial immutability problematic no of course not in the same way that not all traditional

immutable characteristics or the use thereof are problematic right the paper is designed to be a diagnostic tool the paper is designed not to tell you what's right or wrong it's really

here to tell you that when there is an immutable characteristic we need to have further justification of that process or at least open up a framework where we can ask the right questions and so

traditionally not all types of characteristics are banned from decision making so think of age for example age is an immutable characteristic

but we still do have laws against child labor and we would think that's a good thing right and a similar idea has to happen with algorithmically immutable

characteristics that at first sight if something is completely immutable from the perspective of the person that is affected by a decision we need to think about further justification of that use

child labor laws are acceptable because it's important that we protect children and in the same way we need to have a discussion whether it's acceptable to use an algorithmically created

artificially immutable characteristic in order to make decisions right so this is really what the paper is about allowing the same type of assessment and justification that we have with

traditional characteristics and if we do that I think we can actually realize the spirit um of the law if you will because what

the law really wants as I said is to level the playing field and what the law really wants is fair competition and what the law really wants is for the best one to succeed

thank you very much for the attention and I hope this was interesting and I'm very excited to have our discussion with you thanks a lot Sandra the floor will open

at the end uh but I know there are questions already flowing in the room and uh I can't thank you enough for uh closing your remarks with a reference to fair competition being myself

a competition lawyer now um I knew this because in reading the paper I understood it but the presentation makes this abundantly clear

you've set yourself against a pretty big question here um and it's a very hard question the question you're going after and uh and that also deserves a lot of Praise uh

it's very it's very important it's uh we researchers uh in social science we we ask the big questions so I have some questions but I would

prefer to go to the room first before I ask them um does anyone want to interact with Sandra and ask a question

all right so we have Emmanuel who wants to ask a question and I have to say that I wanted to thank Emmanuel at the end of my remarks but Emmanuel is the architect behind this talk here so thanks to him and Will for this and you

have the floor thanks so much and thanks Sandra for your uh for your participation so I'm gonna be a little bit of a devil's advocate here

um not that I disagree with what you want to achieve but the way I was taught uh discrimination law actually in the very beginning you have in your background

sorry can you can you speak uh louder because we don't hear you very well if you can okay I'm gonna try yeah so I'm saying the way I was taught discrimination law was that we

have this right to enter into contracts uh with whomever we want but you still can't hear me I hear you very badly I only heard something about contracts okay let me let me use my headphones give me a

second yeah sure okay in the meantime we have another question can I yeah yeah thank you uh first of

all um congratulations congratulations on your full professor title thank you for your contribution to science so my question is when it comes to practice in order to

minimize the discrimination in practice is it possible to establish an independent control mechanism to ensure that bias is not transferred to the algorithm or on the

contrary should we provide as much data as possible without any limitation in order to teach the algorithm that these data are irrelevant to the subject or

um should a different approach be taken such as forcing the decision maker I mean the people behind the algorithm I mean the creators to prove this causal link between the decision and

subject is it possible what are your thoughts on it um yeah that that's a fantastic question a very difficult one and unfortunately um yeah you just gave me like three

different ways of how to do it and I think each one of them we need to talk a little about so it's unfortunately not something that I think can be done very very quickly in a nutshell what I would

say is that we just need to have a different way of thinking about um good decision criteria and like the one thing that you mentioned with like relevance

that's the hardest part of it because AI just makes everything relevant it's gonna be so much harder to contest that um because you know there is some basis for it in the paper what I suggest

is that you have to at least take those five good decision criteria into consideration which means it has to be transparent um it's supposed to be

um stable for sure and that we start really thinking about the empirical connection between the correlated data like what's the causal link and that leads back to the relevance question

right um I think we should not just allow relevance to be something that is correlated but we should really think about whether we can prove that there's

a causal link there because otherwise we run the risk that it's just a free-for-all and you can just use whatever criterion that you want and in addition to that to round this all up

right just because something is empirically proven to be successful or sufficiently linked with the task at hand like merit for example there still

has to be some ethical and normative acceptability hurdle to overcome so a very simple example would be you know I could say I'm

um a music teacher and I'm gonna take one student per year and in order to apply to be my student you have to submit a brain scan

and you would be like that's weird and also that might not be a problem under non-discrimination law necessarily unless you link it to like disability but maybe not let's just park that for now

um I could come back and say well the reason why I'm doing this is because there's relevance here I could make the argument there is empirical proof science which there is that if you

look at a person's brain you can read a lot from it you know it's called neuroplasticity so it means that you know if you're a musician for example or a dancer or a basketball player or a cab

driver in London your brain looks different the paths in your brain are different because you're training your brain in a different way so a shortcut for me as a music teacher

could be to say give me a brain scan and I'm gonna check if you actually practice eight hours a day rather than just taking your word for it in this interview basically right so there is an empirical

connection there probably and it's relevant but then I would still say well is it ethically acceptable to do that right and then we could say well actually maybe not maybe there's a different type of assessment that we

could do in its place that is less infringing and less intimate like a brain scan is terribly intimate and so you could say well instead of giving your brain scan you just have to perform on

the piano for me for example that's a better way of measuring that and I think that is really what needs to happen like is it transparent is the Criterion transparent is it really relevant and is

it ethically acceptable and so rather than just dumping all the data in one pot and seeing what sticks we really have to re-evaluate what we have

according to those principles and that might help us come to decision criteria that allow people to have more access or more power over the process if that makes sense perfect amazing thanks thanks for your

answer okay so back to Emmanuel and then Marco now yeah we can hear you good okay good good
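An aside on the assessment just described: the four questions (is the criterion transparent, stable, empirically/causally linked, ethically acceptable) can be sketched as a simple checklist. All names and fields below are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One decision criterion, scored against the talk's four questions."""
    name: str
    transparent: bool            # does the affected person know it is used?
    stable: bool                 # does it stay the same across model updates?
    causally_linked: bool        # causal link to the task, not mere correlation
    ethically_acceptable: bool   # normative hurdle beyond predictive success

def justification_gaps(c: Criterion) -> list[str]:
    """Return the questions a decision-maker still has to answer."""
    checks = [
        ("transparent", c.transparent),
        ("stable", c.stable),
        ("causally linked", c.causally_linked),
        ("ethically acceptable", c.ethically_acceptable),
    ]
    return [f"not {label}" for label, ok in checks if not ok]

# The brain-scan music-teacher example from the talk: empirically linked
# (neuroplasticity) but failing the ethical-acceptability hurdle.
brain_scan = Criterion("brain scan", transparent=True, stable=True,
                       causally_linked=True, ethically_acceptable=False)
print(justification_gaps(brain_scan))  # ['not ethically acceptable']
```

On the brain-scan example the only remaining gap is ethical acceptability, which is exactly where the talk says the criterion fails even though it is relevant.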

good good so uh what I was saying before is that the way I was taught uh discrimination law I'm not an expert by any means but the way I always understood it is that you have this

freedom of private autonomy the freedom to enter into contracts with whomever you want and then you have some cases where something is considered egregiously wrong and that's

why we introduce uh rules that constrain that right and I mean you mentioned those goals that are obviously noble goals but the

law as it stands does not intervene to alleviate all those harms right like potentially if you're born into a poor family this might keep you from like studying somewhere we do have social rights

so it's not the law that intervenes you know in those respects to alleviate those kinds of harms so I think maybe the way you use those harms in your theory is perhaps a

bit too heavily based on them because it actually can cover a lot of harms but I think in traditional anti-discrimination law we have those harms and we had something that we found

egregiously wrong from a moral point of view to intervene so I'm not sure that the harms provide uh a sufficient normative basis and that drives the

point I'm closing with I think if you look at how we behave offline in terms of discrimination we kind of exercise the right to choose who we enter into

contracts with in very arbitrary ways so someone might not hire you because they don't like the football team you support or I think uh you know sometimes beautiful people get better treatment than other people that is completely

arbitrary and for individuals it might deprive them of those opportunities it might bring about the very same harms but we haven't constrained this right so maybe what you are trying to do would

necessarily require a complete re-examination of how we allow uh people to make arbitrary decisions but that would have massive practical

consequences right but a very interesting paper thank you very much for presenting um yes so I think when I when I started

the the piece I um I um I I thought about that too like you know to what extent is it acceptable to limit the private autonomy and freedom of contract

um in order to hire whoever you want right and so I look at this in this paper as well where I think it's helpful first of all to divide between the public and the private sector let's park the public

because the public sector wouldn't be allowed to do that anyway because they have a duty to be fair um so like you know if if I'm working in

a Ministry I cannot just decide that I'm only gonna hire people that like Ben and Jerry's that would be illegal but if I'm in the private sector most

people would say it's acceptable to only hire people that like Ben and Jerry's right and so very often it comes up like why you know if the private sector can be arbitrary you know

why should that be a problem and so um there are always those two forces that will fight against each other and so the non-discrimination law came in to

say you know you have all the freedom but those are the criteria that you cannot use and the point of it was you know those criteria were not added because they have something in common that's the interesting thing

about the things that are written there it's not because of anything in common but because they're being used to hold you back right so there's not something you know it could have been dog owners

but it just didn't happen that it's dog owners and so when I'm making the argument right if similarly something is holding you back that is comparable to what is holding you back when you're

applying for a job and you have a disability then that type of thing should be similarly protected right non-discrimination law is not just about randomly selecting um criteria that you shouldn't be using

non-discrimination law at its very basis and this is when you look at moral philosophy um and political theory it's really about equality for all regardless of groups

like the end game of the law is that you have fair competition over certain resources right and so unleveling that playing field is always a problem

it just depends on when the law steps in and says enough is enough right and so I'm saying that we're tipping at that point where that enough is enough could happen not because it's a new group like dog

owners but because the groups are random and because they're ephemeral and because they don't make any sense right and it's just not possible to fairly compete anymore because at its very

basis the law wants you to have like fair competition because at the end of the day and that's really true the law wants you to be independent the law wants you to be self-sustaining

the law wants you to be able to fulfill your life goals and that means among other things applying for a loan getting a job having access to goods and services like health care and

education and shelter so in those high risk areas if you will those are things where people accept that completely random decision making is at least morally problematic and so

um I'm not saying that that means that everything has to be um illegal or that you should not be able to have contractual Freedom what I'm asking for is that people who

deploying those systems think about whether or not they're hindering people from entering the market by doing that again it's not a problem to use an immutable characteristic use dog owners

to let people go to Oxford for all I care if it's stable if everybody knows about it if it's not constantly changing if there is some kind of proven

connection between having a labradoodle and being a good lawyer fine unless it's morally reprehensible right that's all fine it's really more about making sure

that what you're using enables Fair competition rather than taking away the the ability of people to make free and equal decisions of who they want to hire

it's really about that rather than telling them what not to do if that makes sense yeah that's perfect uh Sandra I'm gonna I'm just gonna do a thing here
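A side note on how an "enough is enough" threshold gets operationalized in existing practice: US enforcement uses the EEOC four-fifths rule, under which a group selected at less than 80% of the most-favored group's rate shows prima facie adverse impact. A minimal sketch, applied to a hypothetical algorithmic group (all numbers invented):

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who received the favorable outcome."""
    return selected / applicants

def adverse_impact(rate_group: float, rate_reference: float) -> bool:
    """EEOC 'four-fifths rule': prima facie adverse impact when a group's
    selection rate falls below 80% of the most-favored group's rate."""
    return rate_group / rate_reference < 0.8

# hypothetical numbers: 'dog owners' admitted at 30%, everyone else at 60%
print(adverse_impact(selection_rate(3, 10), selection_rate(6, 10)))  # True
```

Notice the arithmetic itself does not care whether "dog owners" is a legally protected group; that gap between the measurable disadvantage and the protected categories is exactly what the talk is pointing at.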

um I'm gonna ask if everyone accepts to stay five minutes longer over the hour because I I can see that we're collecting more questions from the audience and I want to give you a fair

shot at them so we'll go first with Giacomo and Marco um if that's okay thank you thanks a lot Sandra just a

quick uh with a clarification question um I was asking myself in this theory that you are proposing what is specific to AI

uh while you were talking I tried to do what I call a car salesman test checking what would be the

typical approach of uh these experts in selling cars when facing non-experts so they would use what we

normally would say a lot of tricks to uh you know tailor the offer and if you want to discriminate the offer to to the different customers uh in a way that if

I follow if I try to follow your approach would be in some cases uh perfectly fine in some other cases unacceptable

and that sort of thing for example impulsivity which is something that is completely out of our control and certainly these guys are exploiting the impulsivity of their customers when

making the deal with them so my question is what is specific to AI right so first is to say that just because you look at non-discrimination

law as one basis of governing these systems doesn't mean this um disqualifies or displaces other types of regulation so just because non-discrimination law you know could be helpful for that purpose doesn't mean

that consumer protection law could not and the things that you're discussing for example you know some of them are prohibited or regulated in certain circumstances under

consumer protection law or maybe even competition law depending on who the envisioned perpetrator is and the envisioned victim

in that sense so they're not rivals in that regard I think the novelty or like the AI

specific thing has to do with the fact that the decision criteria are ephemeral appear completely random have

no human sense no representation in human language and are often very nonsensical like if I ran my business I

would never use those criteria in that sense because I don't even have a language to describe that to begin with and I wouldn't constantly change it either right and I wouldn't just use a

random collection of things without thinking about how they actually feed into each other why grades for University applications why salary for

for loan applications right AI does that AI does that seemingly random un-understandable constantly changing modeling it's doing that and that

makes it different and the law hasn't accounted for that type of decision making because humans would never do that and this is why I think the law just needs to be interpreted in a different

way because the harm is the same you're being pulled back but you're being pulled back by a maniac algorithm that is making completely kafka-esque decisions that you don't understand

which would traditionally never happen in a human setting so it's really about you know trying to close that gap because the method the process is different and the perpetrator is

different but the harm is the same but this is only because the algorithm Works differently than the human and unfortunately in those ways that the law didn't anticipate so there is uniqueness

there and it's creating a new harm that I think is unfortunately not being spoken for at the moment sorry I was saying perfect I was asking

Marco to chime in okay thank you Professor it was a fascinating presentation so many interesting follow-ups but for the sake of briefness I will follow up briefly on your point at the very end about the

diagnostic tool how this framework helps us diagnose these various issues and I wonder because in your response to Emmanuel you had mentioned a lot about how decisions can be arbitrary they can be

random but there are circumstances in which we actually want decisions to be random there has been this kind of revival of sortition methods and so on

do you think that this framework could help us understand situations in which instead of trying to diagnose randomness it helps us find when decisions are not as

randomized as we'd like them to be and just a brief second point also I'd like to ask you if this is something that is part of the core of your

theory or if it's something that's not really a use case that you have in mind that is whether you think that this approach could help us understand situations in which those automated

groups emerge and have uh autonomous relevance so to say instead of being merely proxies for variables that are closely associated with protected attributes and could be discriminated against because

from what I understand from your talk there are relevant situations in which you are not simply concerned about proxies and this might have important implications I'd

like to thank you for fleshing this out yes thank you um absolutely right so

a question from a purely legal standpoint some people would say that according to non-discrimination law we're not talking about other laws but just non-discrimination law random decision making is acceptable because

you're not using a protected group to make decisions right so there are people in that area that would say I don't care as long as you're not using ethnicity

it's fine or gender or ability and there are others that say well actually no because it is actually linked to

Merit and they link it back to Aristotle for example who said you know if you have two people who want to play the flute and there's only one flute the person that has the most Talent should get that flute and you should not give

it to a random person you should give it to the person that most deserves it right and again where you land on that Spectrum really depends on where you are

as the political theorist or the legal theorist or the philosopher um when you think about what's the spirit of the law is the spirit of the

law just to say don't use any of those characteristics that are prohibited because that's what I want period everything else is fair game saying yes random is fine or are you in the

other camp that says well actually the best person for the job should win and then randomness is no longer acceptable right and so I'm more in the second camp where I think it's about fair

competition and giving the person a shot that actually has the best promise for all of that so like randomness again depending on where you fall on the theory side in my

opinion wouldn't work and especially when you then think about certain types of professions right it might be that it doesn't really matter you can flip a coin when it comes to showing advertisements about shoes but maybe if

you want to admit someone to med school randomness is no longer acceptable because there's an actual associated risk of not hiring the best person for the job because at some point the person will be a practicing

doctor so again it probably depends on where you look in the Spectrum and to your second Point yes I think that um we might be moving away from the

proxy problem I still think this is incredibly important but there are ways of manipulating a data set so that the group does no longer function as a full

proxy and people are doing that on purpose to escape non-discrimination law so under non-discrimination law you would say you only have a problem if your disadvantaged group is made up

majority let's say of black people then you have a problem so you could just you know throw a couple of extra people in that group to make it look more diverse and pass under non-discrimination law

so there's an incentive to actually dilute the group in a way um to show that you're not discriminating that's the one thing and the second one has to do with the fact that it's really

really hard to prove causality to begin with right and so you know it's very clear that there's a connection between you know banning

headscarves from the workplace and discrimination based on religion but not allowing dog owners to enter University how does that correlate with protected

attributes right it might not be possible to prove a connection there might not even be a connection right so because there is no focus on causality and because there is an incentive to dilute

these groups I think we end up with groups that might not be proxies for any protected group whatsoever and this is the whole point why I thought I need to think about this more because it

might not be the only worry that we have to think about when it comes to bias okay we are two minutes before four uh I'll give the floor to Daniel and Marta in a

row Daniel okay thank you Nicolas thank you very much Sandra for your very interesting uh talk

my name is Daniel I am professor for artificial intelligence and democracy in Florence my question is about the theoretical assumptions of the

the very idea of fighting discrimination and can we solve the problems of fairness if

we don't agree as a democratic society on the very idea of fairness in democratic pluralistic

societies there are those who defend abstract equality and those who argue in favor of an equality that could be

achieved through certain positive discriminations those who give preponderance to equality of opportunity or equality of results my question is

whether or not in your opinion there is an algorithmic and technical procedure to circumvent to avoid to escape this question thank you
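The tension in this question can be made concrete: on the same data, two standard fairness statistics can disagree, so choosing between them is a normative decision no technical procedure can escape. Below, a raw demographic-parity gap is contrasted with a loose sketch of the "conditional demographic disparity" idea from the paper Wachter mentions later in the talk; the data and stratum labels are invented for illustration.

```python
from collections import defaultdict

def selection_rates(pairs):
    """(group, selected) pairs -> per-group selection rate."""
    tot, sel = defaultdict(int), defaultdict(int)
    for g, s in pairs:
        tot[g] += 1
        sel[g] += s
    return {g: sel[g] / tot[g] for g in tot}

def parity_gap(pairs):
    """Raw demographic-parity gap between groups A and B."""
    r = selection_rates(pairs)
    return r["A"] - r["B"]

def conditional_gap(triples):
    """Average per-stratum gap, conditioning on a legitimate factor such as
    qualification (a rough sketch of 'conditional demographic disparity')."""
    by_stratum = defaultdict(list)
    for g, s, stratum in triples:
        by_stratum[stratum].append((g, s))
    gaps = [parity_gap(p) for p in by_stratum.values()]
    return sum(gaps) / len(gaps)

# invented data: group A is concentrated in the high-qualification stratum
data = ([("A", 1, "high")] * 6 + [("A", 0, "high")] * 2 +
        [("B", 1, "high")] * 3 + [("B", 0, "high")] * 1 +
        [("A", 1, "low")] * 1 + [("A", 0, "low")] * 1 +
        [("B", 1, "low")] * 4 + [("B", 0, "low")] * 4)

print(round(parity_gap([(g, s) for g, s, _ in data]), 2))  # 0.12 raw gap
print(round(conditional_gap(data), 2))                     # 0.0 once stratified
```

Here group A looks favored overall, yet within each stratum the selection rates are identical, so the two fairness notions point in opposite directions on the very same decisions.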

thank you and then we are taking Martha's question immediately thanks a lot thanks a lot Sandra um I find your presentation is super

stimulating it triggers a lot of questions uh I'm a big fan of your work so um I just have like one question concerning um the issue that you were mentioning

about like giving the power back to the individuals who are subjected to automated decisions and how that can be operationalized through the theory of harm and that triggered my question

as to how that interacts I know in your case because you've made a comparison between the US and the EU etc but how will that interact with the

um with article uh 22 of the GDPR on the um right of individuals to be informed about uh automated decisions and profiling because uh I'm not gonna like make

this into a comment but just like thinking I work mostly on telecommunications uh law that's my background at least and the

e-privacy rules kind of have been proven to be a bit like not sufficient when it comes to the cookie settings so if we consider somehow like the way in

which users give consent to cookies and this is something that was discussed also a lot when the GDPR was created and this is something that is giving rise to a lot of problems in the discussions

about the e-privacy regulations how can those examples both GDPR and e-privacy kind of somehow like provide insights for your

work on the right to justifiable inferences or reasonable inferences thanks okay we have four minutes Sandra uh and

then we close so uh good luck with this but I'm sure you're up for it um thank you yes so the first question yes I think that's very important

defining fairness I could not agree more that's one of the most important things when it comes to fairness I think it's really really important that there is no single

definition of fairness nor is that a bad thing right so the idea that we have to come up with a common idea of what fairness means is completely irrational

and deluded and also not what the law wants I wrote a paper actually on this which is called why fairness cannot be automated and the title already says it cannot nor should it be automated the

things that I'm talking about if we really think about like what I say here in this Theory piece I think this is a discussion you can have globally because like the the this doesn't really only

refer to um European legislation like in the paper I looked at uh Canada I looked a bit at India I looked a bit um at

the US as well because it was about like theory and like just what thinkers think about that when we think about operationalizing it and like trickling down to the frameworks

um and trying to automate or operationalize what the law actually says we need to be very aware that those competing fairness Notions will come into clash and so what the US thinks of

fairness what's written in the law is completely different to what Europe thinks is fair and acceptable um under the law right Europe is really about substantive

equality is really about trying to level the playing field it's trying to take an active role to make the wrongs of the past good again right taking the active

role whereas the US for example is more like no my idea of equality is I'm just not gonna do anything it's negative freedom it's about I'm not gonna look at people's different experiences I'm not going to take gender

or ethnicity um or sexual orientation into consideration I'm just gonna pretend everybody has the same experience and pretend gender doesn't exist I'm gonna

pretend that ethnicity doesn't exist and whoever wins in that job interview wins fairly because everybody had the same conditions right that's a very different

way of thinking about fairness and so when you actually operationalize this or build an algorithm or you know test for certain things those assumptions have to be very very clear

um and so again I wrote a different paper on that where I showed that the majority of the tools that are being developed are actually legal in Europe because they operationalize the U.S

assumption of fairness and equity that doesn't square with what Europe actually wants but because we just talk about this we say the word fairness without actually figuring out what we

mean by that we are in this conundrum now right and so it just happens that the majority of the fairness tests actually are not acceptable in Europe so yes I could not agree more we really

really have to think about what we mean by fairness especially when we're implementing it

um into the real world at some point um and now to the privacy question yes so I think um that for me

data protection and non-discrimination are distinct but they're definitely sisters or like let's say they uh they have become sisters or might

always have been sisters but let's just say I put them in the same room um for some reason so I think whenever we think about you know algorithmic systems we really need to think about

both the privacy implications as well as um the non-discrimination aspect there I wanted to write more in that paper about how it relates to the right to reasonable

inferences um but I felt it would have distracted from the discussion I think I'm gonna have another piece where I really bring this in because I also had the need to

link it better but what I'm going to write has to do similarly with that fact right so for me then the

reasonableness has to do with what is acceptable to learn about people from a data set is what you're learning um relevant and again we talk about

relevance and what that actually means now I'm proposing what relevance is acceptable or how we should define relevance and when you have that and

then you can contest that basically if those rules have not been followed so that means we need to have those standards in place where we really think about is it acceptable to infer certain

characteristics like dog ownership about people is having a dog actually relevant to the decision at hand and if there are standards which are still missing in my opinion once we have those standards

then the right to contest in the GDPR could be extremely helpful because then you could go back and say well actually you looked at my browser and there's some evidence that that actually you know doesn't say anything about the person and therefore you

shouldn't be inferring this to begin with right so that's how I hope at some point those two things will come together but I agree I didn't spend enough time in that paper on

linking that but it is something that I'm planning on doing okay amazing Sandra so uh look you know we've we've crossed the the 4 pm line uh for a few minutes now and you know I

would have myself like three or four questions to ask you I'm interested in your concept of nature versus culture I'm interested in understanding if your analysis works with

forms of discrimination which are not negative things like affirmative action and so on and so forth um and so as I was telling you in the

chat we had before uh the talk um there will be opportunities for you to come visit us in person here uh We've understood that you have past papers which are highly relevant to our work

and that you are trying to um elaborate uh further uh the ideas that you've expounded in this paper so yes there will be another opportunity

in the language of economists I would say this is a repeated game I don't know if it's infinite but uh certainly it's repeated and so we will be very pleased um to

welcome you here when you have a new research to discuss and and share uh with us so you deserve a round of applause we are very grateful thanks a lot and see you soon

thank you so much for the invitation uh thank you for taking the time today thanks a lot thank you thank you bye-bye all right bye everyone see you soon
