Why we need a movement for justice in AI, not ethics in AI



As I have read article after article on the subject of ethics in AI, I have been struck by the alarming absence of any discussion of what harm actually means in the context of AI: oppression.

A notion that consistently emerges in our work at Fearless Futures, an anti-oppression education organization, is that how we frame a problem informs how we come to solve it. While ethics is a wide and diverse field, our suspicion has been that unless we have a language that speaks to the root issues at stake when it comes to AI, we will get nowhere.

Does “ethics” in its mainstream sense (“do good”) cover what is required of technologists, policymakers, legislators, and funders to solve the problems described here? Not really.

In our view, the root issue must be that structural oppression exists across our communities and societies, and that without active transformation of power relations, AI will perpetuate, reproduce, and amplify this harm. If our conception of the problem isn’t framed in this way, then our efforts will fail.

If there is a disease of the body and our discourse is centered on the person’s chipped nails, then there may well be recommendations for a manicure, but we probably won’t heal the body.

If we are prepared to dig in and acknowledge the disease of the body, then we will do anti-oppression work. In my view then, the quest is for an AI of justice, not an ethical AI.

I am not an ethicist, so I decided to reach out to Dr. Arianne Shahvisi at the University of Brighton to discuss these questions with her. She was so erudite and powerful that I thought it would be simplest to share an excerpt of our exchange below.

ME: I am trying to get my head around why people have focused on a narrative of “ethics” in AI rather than anti-oppression or justice in AI. What’s going on here?

DR. SHAHVISI: Ethics deals with right and wrong, fair and unfair, just and unjust, but it is traditionally employed in ways that manage to avoid discussion of oppression. I know that will sound ridiculous and implausible to you, but unfortunately, that’s how it is. I suspect it is a relic of those who have been most influential within the discipline: wealthy white men, usually from a long time ago (the proverbial “pale, male, and stale” writers who are the bulk of philosophy reading lists) who really did/do feel like fully individual efficacious agents in the world, and do not think beyond that positionality. So, when people use “ethics” in an applied sense (“medical ethics”, “business ethics”) they typically refer to the rightness/wrongness of an interaction between two individuals, e.g. a doctor and a patient, a researcher and a participant, or a service provider and a client. Ethics is very often highly individualized and atomistic, very libertarian, and is applied without consideration of power or structural factors. So when someone asks you to consider AI ethics, they will typically be considering individual misuses of the technology, e.g. weaponization, data protection issues, or an individual robot being treated badly.

ME: Hmm, that appears to be what I’ve been seeing broadly speaking. There must be ethicists who do focus on anti-oppression though, right?

DR. SHAHVISI: Yes, as with most generalizations, there are exceptions. Not all ethics is conducted in this ridiculous way, and there is scope for it to include, and even center, structural considerations. That’s what I try to do in my work, and that’s what others working in the philosophy of race and gender attempt to do too (in case you want to quickly scan an example, here is an ethics paper of mine that just came out which openly resists this libertarian streak in reproductive ethics, in favor of structural concerns). I think it’s fair to say that since the work of philosophers like Arendt and Foucault, and the development of feminist theory, many philosophers do consider power and oppression in their academic work, but those subtleties are yet to be transmitted to those people within organizations and sectors who tend to respond only to PR pressure, and often think of ethics as nothing other than a practical box-ticking exercise.

ME: My fundamental instinct is that one can hold an ethical position AND that position can also fail to deliver an outcome of justice. If that’s the case, I feel that AI ethics simply isn’t sufficient for the scale and complexity of informing our work in AI (presuming our shared goal is to end structural harm, which I have to presume on some level is not everyone’s end game). What are your thoughts?

DR. SHAHVISI: Can you develop a position that is ethically sound, according to a particular ethical theory, yet oppressive? Yes, sadly you can. For example: utilitarianism is one school of thought within ethics that tells us that the right thing to do in a given situation is to maximize wellbeing for as many people as possible. Suppose you had a society in which a minority group had been treated very badly, was now violently resisting, and seemed intent on harming the majority group. On certain readings, utilitarianism would suggest that it was ethically acceptable to kill all of them in order to protect the majority and keep as many people happy as possible. That would be an ethically acceptable position, but a very oppressive one.

ME: So, what role can ethics play if any at all?!

DR. SHAHVISI: I might have painted a rather disparaging picture of my field, but it’s important to remember that ethics is being recuperated, especially as philosophy slowly becomes more diverse. Ethics can and should include considerations of aggregate human units, rather than just individuals. Injustices can and do occur between individuals, but they occur with much greater frequency and intensity between different groups of people (and sometimes those interactions are mediated by an individual encounter, but also often not), in accordance with robust, predictable trends, relating to distributions of social power. You are therefore perfectly justified in arguing in favor of a broader reading of ethics than the traditional atomistic one, in order to better capture the realities of people’s experiences.

End of exchange!

It’s worth noting that while there is much superficial writing on ethics in the mainstream technology press, some brilliant voices are leading the way too, Kate Crawford among them. You may have noticed that Dr. Shahvisi and I treat one concept as central to our understanding of inequality: ‘power’ and its asymmetries. Kate Crawford argues this too. I leave you with a quote from her for good measure:

“Often when we talk about ethics, we forget to talk about power. People will often have the best of intentions. But we’re seeing a lack of thinking about how real power asymmetries are affecting different communities.”

So, let’s move from AI ethics to a movement and action for justice in AI. Then we might finally get somewhere.
