
2023 | Book

Artificial Intelligence, Social Harms and Human Rights


About this book

This book critically explores how, and to what extent, artificial intelligence (AI) can infringe human rights and/or lead to socially harmful consequences, and how such outcomes can be avoided. The European Union has outlined how it will use big data, machine learning and AI to tackle a number of inherently social problems, including poverty, climate change, social inequality and criminality. The contributors to this book argue that developments in AI must take place within an appropriate legal and ethical framework, and they make recommendations to ensure that harm and human rights violations are avoided. The book is split into two parts: the first addresses human rights violations and harms that may occur in relation to AI in different domains (e.g. border control, surveillance, facial recognition); the second offers recommendations to address these issues. It draws on interdisciplinary research and speaks to policy-makers, criminologists, sociologists, scholars of science and technology studies (STS), security studies scholars and legal scholars.

Table of Contents

Frontmatter

AI in Different Domains: AI, Repression and Crime

Frontmatter
Chapter 1. Artificial Intelligence and Sentencing from a Human Rights Perspective
Abstract
The development of Artificial Intelligence (AI) is still in its infancy, but its potential and its dangers are already the subject of controversy. The legal sector will also be affected by AI-driven technological evolution. There are possible advantages to the use of AI in this context: sentencing decisions might become more uniform and consistent, proceedings shorter and less expensive, and human judges might be relieved of workload and able to focus on severe and complex criminal law cases. In a nutshell: the functional capability of the criminal justice system might benefit from an "algorithmic boost" in efficiency. The temptation to compensate for the problem of limited judicial resources and procedural delays by using machines with almost infinite working capacity might become irresistible. From a human rights perspective, however, it is questionable whether "robot judges" assisting or even replacing human judges would be permissible in criminal law, with its grave consequences for the individual defendant's life. We address this question with regard to the European Convention on Human Rights (ECHR), analyzing the prohibition of inhuman or degrading treatment (Art. 3 ECHR), the principle of a fair trial (Art. 6 (1) ECHR), the principle of legality (Art. 7 ECHR), the protection of privacy (Art. 8 ECHR) and the prohibition of discrimination (Art. 14 ECHR), bearing in mind that the outcome of a legal assessment strongly depends on the (hitherto unclear) concrete shape AI-based sentencing systems might take in the future. Nonetheless, we also outline potential countermeasures such as the use of explainable, transparent AI. The article concludes with a plea for a robust legal culture focused on improving sentencing practice through processes of deliberation and experimentation (which might also include technological experiments), rather than replacing it with technological solutions that put humans increasingly out of the loop.
Johannes Kaspar, Stefan Harrendorf, Felix Butz, Katrin Höffler, Lucia Sommerer, Stephan Christoph
Chapter 2. Technical and Legal Challenges of the Use of Automated Facial Recognition Technologies for Law Enforcement and Forensic Purposes
Abstract
Biometrics covers a variety of technologies used for the identification and authentication of individuals based on their behavioral and biological characteristics. A number of new biometric technologies have been developed, taking advantage of our improved understanding of the human body and advanced sensing techniques. They are increasingly being automated to eliminate the need for human verification. As computational power and techniques improve and the resolution of camera images increases, it seems clear that many benefits could be derived from applying a wider range of biometric techniques for security and surveillance purposes in Europe. Facial recognition technology (FRT) makes it possible to compare digital facial images to determine whether they are of the same person. However, there are many difficulties in using such evidence to secure convictions in criminal cases. Some are related to the technical shortcomings of facial biometric systems, which affect their utility as an undisputed identification system and as reliable evidence; others pertain to legal challenges in terms of data privacy and dignity rights. While FRT is coveted as a mechanism to address the perceived need for increased security, there are concerns that the absence of sufficiently stringent regulations endangers the fundamental rights to human dignity and privacy. In fact, its use presents a host of unique legal and ethical concerns. The lack of both transparency and lawfulness in the acquisition, processing and use of personal data can lead to physical, tangible and intangible damages, such as identity theft, discrimination or identity fraud, with serious personal, economic or social consequences. Evidence obtained by unlawful means can also be subject to challenge when adduced in court. This paper looks at the technical and legal challenges of automated FRT, focusing on its use for law enforcement and forensic purposes in criminal matters.
Combining technical and legal approaches is necessary to recognize and identify the main potential risks arising from the use of FRT, in order to prevent possible errors or misuses due both to mistaken technological assumptions and to threats to fundamental rights, particularly, but not only, the right to privacy and the presumption of innocence. On the one hand, a good part of the controversies surrounding the credibility and reliability of automated FRT is intimately related to its technical shortcomings. On the other hand, data protection, database custody, transparency, accountability and trust are relevant legal issues that might raise problems when using FRT. The aim of this paper is to improve the usefulness of automated FRT in criminal investigations and as forensic evidence within the criminal procedure.
Patricia Faraldo Cabana

AI in Different Domains: Impacts of AI on Specific Rights

Frontmatter
Chapter 3. Artificial Intelligence, International Law and the Race for Killer Robots in Modern Warfare
Abstract
Artificial intelligence is no longer restricted to science fiction. Over the past decade, human dependency on data-gathering devices has led to artificial intelligence impacting everyday life. Artificial intelligence is commonplace in facial recognition, search engines and data-gathering tools and, more worryingly, in autonomous weapons. The use of artificial intelligence by states and by communication companies brings with it new legal and ethical concerns which international law is struggling to regulate. There is a distinct lack of any legal framework around the use of artificial intelligence in conflict and modern warfare. The international legal community, states and communication companies are rightly concerned about the implications of the development of fully autonomous weapons systems which have no human input, and therefore about the militarisation of artificial intelligence. These machines, or ‘killer robots’, will be capable of making autonomous decisions in times of conflict. Despite these concerns, and mirroring the race for nuclear weapons during the Cold War era, states are locked in an increasingly worrying race for autonomous killing machines. The use of autonomous weapons will shape modern warfare for decades to come. This chapter focuses on the use of artificial intelligence technology in automated weapons and, in particular, on the use of drones during conflict and the threat their use poses to human life in the absence of a restrictive international legal framework.
Kristian Humble
Chapter 4. Artificial Intelligence and the Prohibition of Discrimination in the EU: A Private Law Perspective
Abstract
The increasingly widespread use of AI tools in various stages of a contract’s life cycle has brought many challenges, including for the protection of human rights. Discriminatory practices have been detected in many areas of private law where algorithms are used in selection or decision-making processes (e.g. in the context of loan financing, marketing, employment, and insurance). Looking at the EU legal framework, this chapter aims to analyse selected instances of discriminatory practices caused by AI systems that occur in horizontal relationships (i.e. relationships between private individuals). More precisely, it focuses on two major fields of private law where the EU offers protection against discrimination: employment matters and access to and the supply of goods and services. Although EU Member States may provide a higher level of protection in their national laws, this analysis takes a supranational approach and focuses exclusively on the protection guaranteed by EU law.
Karmen Lutman

Policy, Regulation, Governance: AI and Ethics

Frontmatter
Chapter 5. In Defence of Ethics and the Law in AI Governance: The Case of Computer Vision
Abstract
The chapter examines the intersection of the legal and ethical compliance of AI systems in the R&D domain. It first offers insights into the various forms of harm caused by AI and the awareness of the engineering community of these harms. It then shows the evolution of AI governance as a specific field of ICT governance, in which ethics has obtained a prominent policy role, and how the trend has moved from the “race to AI” to the rush to “AI ethics” and onwards to the “race for the governance of AI”. Lastly, it narrows the focus to the relationship between ethics and law in a case study of legal and ethical assessments of access, collection and other types of processing of personal data for the purposes of computer vision. It shows that the tensions between ethics and the law exist more at a surface and abstract level, while the two are complementary in mitigating the potential negative societal and individual harms of AI. While ethics entering the field of AI governance references the law (e.g. human rights), this chapter shows that the opposite is also the case: the law makes references to ethics to support the law, since under the GDPR only research that incorporates the appropriate standards of methodology and ethics “deserves” the legal scientific research exception.
Aleš Završnik
Chapter 6. What Role for Ethics in the Law of AI?
Abstract
The aim of the chapter is to explore the broader scope of the Ethics Guidelines for Trustworthy AI. In particular, the chapter focuses on the reasons that led the EU to develop an ethical approach to AI, seeking to investigate to what extent it is arguable that the ethical principles for trustworthy AI should be based on compliance with fundamental rights. It points out that the symbolic value of fundamental rights, as embedded within this non-binding tool, shows the normative vision of the EU, mitigating the possible conflict between the institutional and private actors involved and their related interests. It argues that neither the ethical approach nor the mere legal design of AI can effectively address the issue of algorithmic inferences and their impact on individuals and society. Finally, it seeks to contextualize the rationales of the Ethics Guidelines within the core issues of the Proposal for an AI Regulation (Artificial Intelligence Act), investigating commonalities and differences between these two regulatory approaches.
Mariavittoria Catanzariti
Chapter 7. Introduction to Computational Ethics
Abstract
Computational ethics is a field of artificial intelligence (AI) that studies algorithms for the computation of ethical decisions. The practical aspect of the field is that it provides computer scientists and engineers with means for implementing artificial agents and intelligent systems capable of taking ethically permissible decisions. In this chapter, we aim to introduce the field of computational ethics by providing an overview of typical computational approaches to ethical decision-making. The chapter illustrates the application of the presented approaches to resolving various ethical dilemmas and emphasizes their utility and limitations. We conclude the chapter by providing a glimpse into open avenues for further research in computational ethics.
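To make the abstract's notion of "algorithms for computation of ethical decisions" concrete, here is a minimal illustrative sketch, not taken from the chapter: it combines a deontological filter (hard rules that forbid certain actions) with a consequentialist ranking (choose the permitted action with the highest expected utility). All names, rules and utility values are hypothetical.

```python
# A toy hybrid ethical decision procedure: deontological constraints
# act as a hard filter, then a consequentialist criterion ranks the
# remaining options. Purely illustrative; not from the book.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    expected_utility: float                      # consequentialist score
    violates: set = field(default_factory=set)   # rules this action would break

# Hypothetical set of inviolable rules.
FORBIDDEN = {"deceive_user", "cause_physical_harm"}

def choose(actions):
    """Return the permissible action with the highest expected utility,
    or None if every candidate violates a hard constraint."""
    permitted = [a for a in actions if not (a.violates & FORBIDDEN)]
    if not permitted:
        return None  # no ethically permissible option exists
    return max(permitted, key=lambda a: a.expected_utility)

candidates = [
    Action("nudge_with_false_claim", 0.9, {"deceive_user"}),
    Action("present_honest_summary", 0.7),
    Action("stay_silent", 0.2),
]
best = choose(candidates)
print(best.name)  # the highest-utility action that breaks no rule
```

The design choice here mirrors a common pattern in the machine-ethics literature: rule-based (top-down) and utility-based approaches each fail on their own in edge cases, so hybrid architectures layer them.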
Ljupčo Todorovski

Policy, Regulation, Governance: AI and Harm Prevention

Frontmatter
Chapter 8. Artificial Intelligence and Human Rights: Corporate Responsibility Under International Human Rights Law
Abstract
Private businesses are key drivers in the development of artificial intelligence (AI), whether in the field of criminal justice, financial fraud, the provision of essential public services or recruitment, to name a few. Due to their central position in the creation of AI, businesses play a crucial role in ensuring that AI is human-centric and respects human rights. However, while many guidelines and principles in AI ethics address the role of businesses developing AI, the position of businesses under international human rights law remains somewhat unclear in this context. The current international human rights law framework was developed in the aftermath of the Second World War, at which time the focus was placed on the protection of individuals from States rather than from private businesses. Furthermore, the incredible leaps in technological development that have occurred since were not envisaged by the drafters of international human rights law. While progress has certainly been made with the adoption of (non-legally binding) international standards to protect human rights from private businesses, crucial questions regarding the specific role and responsibilities of businesses developing AI remain to be answered: What human rights responsibilities do businesses developing AI currently have under international human rights law? What standards exist to elucidate these, and what is the legal status of these standards? What shortcomings and challenges exist in this regard, and how can we move forward to ensure AI that respects human rights? This chapter seeks to answer these questions. First, the chapter briefly exemplifies the negative impact that AI developed by private businesses can have on human rights, such as causing discriminatory access to goods and services. Next, the general legal framework regarding businesses’ responsibilities under international human rights law is set out and applied to the development of AI, in order to identify more specific standards of behaviour expected from businesses in this context and key challenges in achieving their implementation.
Lottie Lane
Chapter 9. As Above so Below: The Use of International Space Law as an Inspiration for Terrestrial AI Regulation to Maximize Harm Prevention
Abstract
Artificial intelligence (AI) is becoming an integral part of technologies aimed at preventing harm, both on Earth and in outer space. However, there exists no comprehensive legal framework governing the use of AI, as international law is yet to regulate this emerging field. The majority of the currently existing standards for AI regulation were adopted as soft law or ethical guidelines by various subjects, and therefore vary in content and format. This chapter argues that in the process of transforming such non-binding guidelines into a binding, coherent international legal framework for AI, certain space law provisions, in particular those found in the fundamental Outer Space Treaty and the subsequent Liability Convention, could serve as an inspiration, in order to ensure that the international framework for AI is aimed at preventing harm to the greatest extent possible. It accomplishes this by first illustrating current uses of AI solutions in preventing harm on Earth and in outer space (corresponding to the term narrow AI), as well as some examples of technology under development that is intended to reach greater or even complete autonomy (strong AI). Secondly, it extracts some basic soft-law and ethical principles that are most often found in the regulations governing the use of AI on Earth, before proceeding to the relevant space law principles guiding the use of AI in outer space. Lastly, it compares the two categories, the existing general guidelines and the corresponding space law provisions, to examine how the latter could serve as a good example in concretizing the existing general principles while translating them into a comprehensive, binding legal framework on Earth aimed at preventing harm and maximizing social benefits, in line with what the authors call the “as above so below” approach.
Iva Ramuš Cvetkovič, Marko Drobnjak
Chapter 10. Democratizing the Governance of AI: From Big Tech Monopolies to Cooperatives
Abstract
Today, artificial intelligence (AI) solutions are so ubiquitous in many parts of the world that many individuals are blissfully unaware of how much they rely on them in their everyday life (European Commission 2017). AI is, for example, used in the public domain in education and taxation processes, in the context of smart cities, judicial systems and election campaigns, as well as in insurance, banking, and other business sectors.
Katja Simončič, Tonja Jerele
Backmatter
Metadata
Title
Artificial Intelligence, Social Harms and Human Rights
Editors
Aleš Završnik
Katja Simončič
Copyright Year
2023
Electronic ISBN
978-3-031-19149-7
Print ISBN
978-3-031-19148-0
DOI
https://doi.org/10.1007/978-3-031-19149-7
