Published in: AI & SOCIETY 2/2023

Open Access 21-05-2022 | Original Article

Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms

Authors: Benedetta Giovanola, Simona Tiribelli

Abstract

The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made fairness in ML, and more specifically in healthcare ML algorithms (HMLA), an important and urgent concern. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. To achieve this goal, following a first section that clarifies the background, methodology and structure of the paper, in the second section we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is defined in turn as the absence of biases. After showing that this framing is inadequate, in the third section we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than mere non-discrimination. Moreover, we highlight that fairness has not only a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons.
In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value for the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases, and more than just distribution: it needs to ensure that HMLA respect persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society.


Footnotes
1
This technical pathway to achieve algorithmic fairness via the development of non-biased ML models also emerges in the wider and more general debate on the ethics of algorithmic decision-making and ML (Barocas 2014; Shah 2018; Diakopoulos and Koliska 2017; Giovanola and Tiribelli 2022).
 
2
According to Rajkomar et al. (2018), a weak form of equal outcomes is ensuring that both the protected and non-protected groups benefit similarly from a model (equal benefit); a stronger form is making sure that both groups benefit and any outcome disparity is lessened (equalised outcomes).
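The distinction between the weak and the strong criterion can be sketched as two simple checks over per-group benefit rates. This is an illustrative sketch only, not code from Rajkomar et al.; the function names, the dictionary layout and the idea of comparing against a no-model baseline are assumptions made here for clarity:

```python
def outcome_disparity(benefit_by_group):
    """Gap between the best- and worst-off groups' benefit rates."""
    rates = list(benefit_by_group.values())
    return max(rates) - min(rates)

def equal_benefit(baseline, with_model):
    """Weak criterion: every group's benefit rate improves under the model."""
    return all(with_model[g] > baseline[g] for g in baseline)

def equalised_outcomes(baseline, with_model):
    """Stronger criterion: every group benefits AND the disparity
    between groups shrinks relative to the baseline."""
    return (equal_benefit(baseline, with_model)
            and outcome_disparity(with_model) < outcome_disparity(baseline))
```

For instance, moving from baseline rates {"protected": 0.2, "non_protected": 0.5} to model rates {"protected": 0.4, "non_protected": 0.6} satisfies both criteria: each group benefits and the gap narrows from 0.3 to 0.2.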
 
3
Rajkomar et al. (2018) illustrate these metrics with an example that considers African American patients as a protected group: ‘A higher false-negative rate in healthcare ML prediction would mean African American patients were missing the opportunity to be identified; in this case, equal sensitivity is desirable. A higher false-positive rate in healthcare ML prediction might be especially deleterious by leading to potentially harmful interventions (such as unnecessary biopsies), motivating equal specificity. When the positive predictive value for alerts in the protected group is lower than in the nonprotected groups, clinicians may learn that the alerts are less informative for them and act on them less (a situation known as class-specific alert fatigue). Ensuring equal positive predictive value is desirable in this case’ (p. 5).
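The group-conditional metrics named in the quotation (sensitivity, specificity and positive predictive value per group) are all derived from a per-group confusion matrix. A minimal sketch, assuming binary labels and predictions (1 = condition present) and an arbitrary group label per patient; the function name and data layout are illustrative, not from Rajkomar et al.:

```python
from collections import defaultdict

def group_metrics(y_true, y_pred, groups):
    """Per-group sensitivity, specificity and positive predictive value
    from binary labels and predictions (1 = condition present)."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1 and p == 1:
            counts[g]["tp"] += 1
        elif t == 0 and p == 1:
            counts[g]["fp"] += 1
        elif t == 0 and p == 0:
            counts[g]["tn"] += 1
        else:
            counts[g]["fn"] += 1
    out = {}
    for g, c in counts.items():
        out[g] = {
            # Sensitivity (true-positive rate): equal values across groups
            # rule out the differential false-negative rates discussed above.
            "sensitivity": c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else None,
            # Specificity (true-negative rate): guards against group-specific
            # false-positive rates and the harmful interventions they trigger.
            "specificity": c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else None,
            # Positive predictive value: unequal values can produce
            # class-specific alert fatigue.
            "ppv": c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else None,
        }
    return out
```

Comparing the per-group values then makes the disparities at issue directly visible, e.g. a lower `"ppv"` for the protected group than for the non-protected one.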
 
4
Foundational questions about discrimination are familiar to legal scholars, too, and in recent years, in particular, there has been a renewed interest in philosophical questions about anti-discrimination law (Khaitan 2015; Hellman and Moreau 2013) aimed mainly at defining under what conditions discrimination ought to be prohibited. The focus of these inquiries, however, is on discrimination rather than on the relationship between discrimination and fairness.
 
5
However, drawing on Dworkin (2000), Waldron (2017, p. 14) acknowledges that not all discrimination is wrongful; in fact, there may also be forms of unequal treatment or ‘surface-level’ discrimination that do not imply any moral wrongdoing but are instead justifiable by appeal to the whole range of human interests, as in the case, discussed by Waldron, of firefighters being selected for their physical fitness.
 
6
Darwall (1977) introduces the well-known distinction between recognition respect and appraisal respect, whereby the latter depends on the appraisal of a person’s character. Darwall’s account of recognition respect has been further elaborated on by Carter (2011), who develops the notion of ‘opacity-respect’; that is, recognition respect expressed through the idea that we have to treat every person as ‘opaque’, respecting them on the footing of moral equality, without engaging in an assessment of their personal merits or demerits (Carter 2011).
 
7
The question remains open regarding what ought to be distributed, such as with resources, opportunities or outcomes. This is the well-known issue of the ‘metrics’ or ‘currency’ of justice: for an overview, see Brighouse and Robeyns (2010).
 
8
Just to mention one of the most problematic cases, consider racial injustice; it is historically rooted, socially shaped and institutionally entrenched in distributive policies: see Shelby (2016), Kelly (2017).
 
9
The luck egalitarian account of fairness has been widely applied to the domain of health and healthcare, arguing that people’s health and the healthcare they receive are just when the effects of bad luck, and only these, are neutralised (Segall 2010). For an alternative proposal, drawing insights from Rawls’s principle of fair equality of opportunity, see Daniels (1985).
 
10
For a more detailed reflection on respect for particular individuals, social relations and interpersonal relationships, see Giovanola and Sala (2021); for an inquiry into the ways in which technology impacts social relations and interpersonal relationships, as well as individuals’ agency and sense of justice, see Giovanola (2021).
 
11
The issue of epistemic injustice was first extensively discussed by Fricker (2007), who distinguishes two forms: testimonial injustice and hermeneutical injustice. The former occurs when a speaker is given less credibility than deserved because of an identity prejudice held by the hearer; to suffer a credibility deficit in turn impedes one’s capacity as an epistemic agent, making it both an ethical and an epistemic wrong. The latter occurs when there exists a lack of collective interpretative resources required for a group to understand (and express) significant aspects of their social experience.
 
12
We thank an anonymous reviewer for pushing us to make these issues explicit.
 
Literature
Benjamin R (2019) Race after technology: abolitionist tools for the new Jim Code. Polity, Medford
Brighouse H, Robeyns I (2010) Measuring justice: primary goods and capabilities. Cambridge University Press, Cambridge
Char DS, Shah NH, Magnus D (2018) Implementing machine learning in health care—addressing ethical challenges. N Engl J Med 378(11):981–983
Cotter A, Jiang H, Sridharan K (2018) Two-player games for efficient non-convex constrained optimization. arXiv preprint arXiv:1804.06500
Danks D, London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, pp 4691–4697. https://doi.org/10.24963/ijcai.2017/654
Dworkin R (2000) Sovereign virtue: the theory and practice of equality. Harvard University Press, Cambridge
Eidelson B (2015) Discrimination and disrespect. Oxford University Press, Oxford
Eubanks V (2018) Automating inequality: how high-tech tools profile, police, and punish the poor. St Martin’s Publishing, New York
Ferguson AG (2017) The rise of big data policing: surveillance, race, and the future of law enforcement. New York University Press, New York
Forst R (2014) Two pictures of justice. In: Justice, democracy and the right to justification: Rainer Forst in dialogue. Bloomsbury, London, pp 3–26
Fricker M (2007) Epistemic injustice: power and the ethics of knowing. Oxford University Press, New York
Giovanola B (2018) Giustizia sociale: eguaglianza e rispetto nelle società diseguali. Il Mulino, Bologna
Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22):2402–2410. https://doi.org/10.1001/jama.2016.17216
Hellman D, Moreau S (2013) Philosophical foundations of discrimination law. Oxford University Press, Oxford
Kelly E (2017) The historical injustice problem for political liberalism. Ethics 128:75–94
Khaitan T (2015) A theory of discrimination law. Oxford University Press, Oxford
Lippert-Rasmussen K (2013) Born free and equal? A philosophical inquiry into the nature of discrimination. Oxford University Press, Oxford
Mansoury M, Abdollahpouri H, Pechenizkiy M, Mobasher B, Burke R (2020) Feedback loop and bias amplification in recommender systems. In: Proceedings of the 29th ACM International Conference on Information and Knowledge Management. Association for Computing Machinery, New York, pp 2145–2148. https://doi.org/10.1145/3340531.3412152
Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. New York University Press, New York
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown, New York
Overdorf R, Kulynych B, Balsa E, Troncoso C, Gürses S (2018) Questioning the assumptions behind fairness solutions. arXiv preprint arXiv:1811.11293
Pariser E (2011) The filter bubble. Penguin, New York
Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge
Pleiss G, Raghavan M, Wu F, Kleinberg J, Weinberger KQ (2017) On fairness and calibration. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17). Curran Associates, Red Hook, pp 5684–5693
Sangiovanni A (2017) Humanity without dignity: moral equality, respect, and human rights. Harvard University Press, Cambridge
Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J (2019) Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, Atlanta, pp 59–68. https://doi.org/10.1145/3287560.3287598
Shelby T (2016) Dark ghettos: injustice, dissent, and reform. Harvard University Press, Cambridge
Umbrello S (2020) Imaginative value sensitive design: using moral imagination theory to inform responsible technology design. Sci Eng Ethics 26(2):575–595
Van den Hoven J, Vermaas PE, van de Poel I (2015) Handbook of ethics, values, and technological design: sources, theory, values and application domains. Springer. ISBN 978-94-007-6969-4
Waldron J (2017) One another’s equals: the basis of human equality. Harvard University Press, Cambridge
Williams B (1981) Persons, character and morality. In: Moral luck: philosophical papers 1973–1980. Cambridge University Press, Cambridge, pp 1–19
Wolff J (2010) Fairness, respect, and the egalitarian “ethos” revisited. J Ethics 14(3/4):335–350
Zafar MB, Valera I, Gomez Rodriguez M, Gummadi KP (2015) Fairness constraints: mechanisms for fair classification. arXiv preprint arXiv:1507.05259
Metadata
Title: Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms
Authors: Benedetta Giovanola, Simona Tiribelli
Publication date: 21-05-2022
Publisher: Springer London
Published in: AI & SOCIETY, Issue 2/2023
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI: https://doi.org/10.1007/s00146-022-01455-6
