Artificial Intelligence: the right to protection from discrimination caused by algorithms, machine learning and automated decision-making

Abstract

This article analyses how Artificial Intelligence (AI), ML algorithms and automated decision-making can give rise to discrimination, how Europe's existing equality framework can regulate the resulting inequality, and how that framework must change to meet the challenges ahead. The authors also examine some of the ways in which the GDPR bears on AI, ML algorithms and automated decision-making.

Notes

  1. See “A Union that strives for more—My agenda for Europe”, at [3]; see https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf.

  2. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/.

  3. This definition is taken directly from the European Commission in its Communication on AI. The Commission goes on to explain that, “AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).” A detailed paper exploring the meaning of AI produced by the European Commission’s High-Level Expert Group on Artificial Intelligence is available here: https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines.

  4. https://iapp.org/news/a/the-privacy-pros-guide-to-explainability-in-machine-learning/.

  5. Babuta, A., Oswald, M., Rinik, C.: ML Algorithms and Police Decision-Making: Legal, Ethical and Regulatory Challenges, RUSI, September 2018, available here: https://rusi.org/sites/default/files/20180329_rusi_newsbrief_vol.38_no.2_babuta_web.pdf.

  6. Oswald, M., Grace, J., Urwin, S., Barnes, G.C.: Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality, Information & Communications Technology Law, 27:2, 223–250, available at https://doi.org/10.1080/13600834.2018.1458455.

  7. Burrell, J.: How the machine ‘thinks’: Understanding opacity in machine learning algorithms, 2016, available here: https://journals.sagepub.com/doi/10.1177/2053951715622512.

  8. A comprehensive review of the use of this type of “predictive policing” technology is outlined in Hannah Couchman’s report produced for Liberty entitled, “Policing by Machine—Predictive Policing and the Threat to Our Rights” and is available here: https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf.

  9. The UK Law Society’s June 2019 report, “Algorithms in the Criminal Justice System”, provides a sense of the scale of this technology within the UK: https://www.lawsociety.org.uk/support-services/research-trends/algorithm-use-in-the-criminal-justice-system-report/.

  10. Buolamwini, J., Gebru, T.: Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77–91, 2018, available at http://proceedings.mlr.press/v81/buolamwini18a.html.

  11. Fussey, P., Murray, D.: Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology, July 2019, available here: https://48ba3m4eh2bf2sksp43rq8kk-wpengine.netdna-ssl.com/wp-content/uploads/2019/07/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report.pdf.

  12. https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf.

  13. That is to say, the “body of common rights and obligations that are binding on all EU countries, as EU Members”: https://eur-lex.europa.eu/summary/glossary/acquis.html.

  14. Article 19 TFEU now says “1. Without prejudice to the other provisions of the Treaties and within the limits of the powers conferred by them upon the Union, the Council, acting unanimously in accordance with a special legislative procedure and after obtaining the consent of the European Parliament, may take appropriate action to combat discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation.”

  15. Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin and Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation.

  16. Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast).

  17. ECLI:EU:C:2005:709.

  18. ECLI:EU:C:2010:21, see [20]–[22].

  19. ECLI:EU:C:2016:278.

  20. See for instance C-395/15 Daouidi ECLI:EU:C:2016:917.

  21. See for instance https://www.equalitylaw.eu/publications/comparative-analyses.

  22. See, for example, the UK case of R (on the application of Coll) v Secretary of State for Justice [2017] UKSC 40.

  23. Wachter-Boettcher, S.: Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (2017).

  24. Ibid.

  25. Ibid.

  26. https://www.technologyreview.com/s/613898/an-ai-app-that-undressed-women-shows-how-deepfakes-harm-the-most-vulnerable/.

  27. EU:C:2008:415.

  28. EU:C:2015:480.

  29. EU:C:1986:204.

  30. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top.

  31. https://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html.

  32. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679.

  33. https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.

  34. https://zenodo.org/record/3237865#.XTGSTPJKhQI.

  35. Case C-109/88, Handels-og Kontorfunktionaerernes Forbund i Danmark v Dansk Arbejdsgiverforening, ex p. Danfoss A/S, EU:C:1989:383.

  36. https://www.technologyreview.com/f/613560/google-shows-how-ai-might-detect-lung-cancer-faster-and-more-reliably/.

  37. Ardila, Kiraly, Bharadwaj, Choi, Reicher, Peng, Tse, Etemadi, Ye, Corrado, Naidich, Shetty: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography, Nature Medicine (2019), available here: https://www.nature.com/articles/s41591-019-0447-x.

  38. http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf.

Author information

Corresponding author

Correspondence to Robin Allen.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Robin Allen QC and Dee Masters are barristers, co-founders of AI Law Hub (www.ai-lawhub.com) and members of Cloisters chambers (www.cloisters.com).

About this article

Cite this article

Allen, R., Masters, D. Artificial Intelligence: the right to protection from discrimination caused by algorithms, machine learning and automated decision-making. ERA Forum 20, 585–598 (2020). https://doi.org/10.1007/s12027-019-00582-w
