
2023 | Book

Artificial Intelligence and Normative Challenges

International and Comparative Legal Perspectives

Edited by: Angelos Kornilakis, Georgios Nouskalis, Vassilis Pergantis, Themistoklis Tzimas

Publisher: Springer International Publishing

Book series: Law, Governance and Technology Series


About this book

Artificial intelligence (AI)—both in its current, comparatively narrow form and in its potential future forms (such as general and super intelligence)—has raised concerns as well as hopes. Its actual and potential consequences are increasingly far-reaching and affect almost every facet of human life at the collective and individual levels: from the use of mobile phones and social media to autonomous weapons, and from the digitalization of knowledge and information to the patentability of AI innovations, unexpected philosophical, ontological, political, and legal questions arise. This book offers an insightful and essential guide to the scholarly questions shaping the present and future of humanity. In a collection of scholarly essays by prominent academics, the book addresses the most important legal questions surrounding artificial intelligence: its effects on a broad spectrum of human behaviour and the general legal response, including questions of artificial intelligence and legal personhood; responsibility, liability, and culpability in the age of artificial intelligence; the challenges artificial intelligence poses to intellectual property systems; challenges in the field of human rights; and the effects of artificial intelligence on jus ad bellum and jus in bello. Given its scope, the book will appeal to researchers, academics, and practitioners seeking a guide to this rapidly changing landscape.

Inhaltsverzeichnis

Frontmatter
Introduction
Abstract
Artificial Intelligence (AI)—both in its current narrow form and even more in its potential future forms (such as General and Super Intelligence)—has raised concerns as well as expectations. Its actual and potential consequences are ever-expanding and apply to almost every facet of human life at the collective and individual levels. From the use of mobile phones and social media to autonomous weapons, and from the digitalization of knowledge and information to patent eligibility for AI innovations, philosophical, ontological, political, and legal issues arise that were, until recently, unexpected.
Angelos Kornilakis, Georgios Nouskalis, Vassilis Pergantis, Themistoklis Tzimas

AI and Questions of Personhood and Ethics

Frontmatter
The Peculium of the Robot: Artificial Intelligence and Slave Law
Abstract
The chapter discusses the analogy between the status of contemporary types of artificial intelligence and that of ancient slaves under Roman law, in order to evaluate, from both a theoretical and an applied perspective, the recognition of some degree of juridical subjectivity for artificial intelligences in the field of patrimonial, but also non-patrimonial, legal relationships.
Marco Rizzuti
Legal Personhood for Autonomous AI: Practical Consequences in Private Law
Abstract
The chapter examines the option of granting legal personality to autonomous AI machines from the perspective of private law, focusing on practical challenges. In particular, it examines the scope and practical implications of recognizing the legal personality of autonomous AI systems regarding: (a) the estate and the liability of the AI (including vicarious liability); (b) the right of personality and its protection; (c) the capacity to conclude binding and legally enforceable contracts as a contractual party and the application of contract law, including the law of mistake and agency; and (d) the question of identification and legal residence. Furthermore, it points out the interdependence between the legal status of an AI and that of its user, and the complexities arising from it.
Angelos Kornilakis
Artificial Intelligence’s Black Box: Posing New Ethical and Legal Challenges on Modern Societies
Abstract
Artificial intelligence has proven to be one of the most influential scientific fields in today’s business world, since its technological breakthroughs play an ever-increasing role in various sectors of modern life and transactions. Nonetheless, concerns are raised about the possible adverse effects it may have on individuals and society, given that various incidents of human rights violations, during—and due to—the operation of so-called autonomous AI systems, have already been observed. This ‘negative’ aspect of AI systems is attributed to the so-called “black box problem” or “black-box effect”, an inherent limitation of AI that challenges its further evolution and public acceptance and has sparked a lively debate in the scientific community about potential tools for counteracting it. The present paper aims to shed light on the “new” legal and ethical challenges that AI poses for modern societies. First, the paper introduces the concept of AI “opacity” and examines some of its causes. Subsequently, it presents several incidents of human rights violations that have occurred due to AI systems in various sectors, including the job market, the banking sector, (private) insurance, justice, transactions, art, and transportation. The paper concludes with some of the most important recommended guiding principles for counteracting the black-box effect of AI and meeting the new legal and ethical challenges it poses.
Vasiliki Papadouli

AI and Civil Liability

Frontmatter
The Role of the Autonomous Machines at the Conclusion of a Contract: Contractual Responsibility According to Current Rules of Private Law and Prospects
Abstract
Nowadays, one of the most important applications of autonomous AI systems is the conclusion of contracts. Nonetheless, concerns are raised about the validity of contracts concluded by autonomous machines and, subsequently, about contractual responsibility in cases of non-performance. Various theories have already been expounded in legal doctrine with a view to tackling these concerns. Some suggest that autonomous AI systems are mere communication tools or agents that render their user liable, whilst other legal scholars suggest that the autonomous AI systems themselves—not their users—should be held liable. After presenting the arguments of these theories, the chapter concludes that the legal community should accept the validity of contracts concluded by intelligent agents, considering their users legally bound to performance. Users’ liability could be based on the theory of de facto contracts (faktische Verträge) or, alternatively, on the doctrine of reliance liability (Vertrauenshaftung). In both cases, the users’ right to invalidate a contract in case of mistake must be guaranteed, with the mistake being attributed to the intelligent agent.
Vasiliki Papadouli
Understanding the Risks of Artificial Intelligence as a Precondition for Sound Liability Regulation
Abstract
Not all AI risks are new. The risk of traffic accidents generated by self-driving cars is already a reality in today’s traffic. Physical injuries a patient may suffer during medical treatment occur regardless of whether the damage is caused by an autonomous agent or a human doctor. Modern societies are already familiar with these risks. This chapter explores whether liability regimes, traditionally designed to deter physical risks and compensate injured persons when they materialize, have rules apt for tackling the social risks that AI represents. In the European Union, the European Parliament has adopted the text of a Regulation on AI liability. The text is a clear step forward in adjusting liability rules to the challenges of AI. It sets out a position on who should be responsible and on what basis, and provides injured persons with procedural devices to enhance their position and tackle the black-box issue. It thus, for better or worse, deals with well-known fundamental issues surrounding AI liability. However, while social risks have previously been recognized by the European Commission in its White Paper and by some scholars, the adopted text fails to address them specifically. This chapter presents the nature of the AI risks that liability rules should regulate and asks whether traditional liability concepts are apt for regulating these novel types of risk. Just as in the case of safety regulation, this chapter attempts to demonstrate that a proper understanding of AI risks is the basis for sound liability regulation.
Nasir Muftic

AI and Issues of Responsibility and Adjudication

Frontmatter
Attributing Conduct of Autonomous Software Agents with Legal Personality under International Law on State Responsibility
Abstract
This chapter considers attribution under international law on state responsibility in relation to the conduct of autonomous software agents (ASAs) with legal personality that are used to perform a state’s cyber security functions. It examines the extent to which existing international law is applicable to wrongful conduct by ASAs, and the extent to which the technical autonomy and separate legal status of these entities problematizes the application of the law. Overall, it is argued that ASAs as legal entities are conceptually compatible with existing law, and that even where these entities are legally distinct from the human agents of the state, the link between these entities and the human beings responsible for their creation is sufficient to establish attribution under the law on state responsibility.
Samuli Haataja
Algorithmic Criminal Justice: Is It Just a Science Fiction Plot Idea?
Abstract
This chapter examines the use of algorithms in the realm of criminal justice (known as algorithmic criminal justice) and the potential paradigm shift towards pre-emption-driven decision-making. It contributes to debates about the increasing role of enabling technologies in understanding and responding to crime by turning the spotlight on criminal proceedings. It argues that, at first sight, algorithmic decision-making tools may present a strong potential to improve the operational efficiency of criminal justice authorities, but their use remains associated with hard-to-solve challenges, ranging from lack of transparency to questionable compatibility with core principles of substantive and procedural criminal law. Finally, it highlights the need for a balanced dialogue at the crossroads of technological novelty and (criminal) justice.
Athina Sachoulidou

Intellectual Property Protection and Patentability of AI

Frontmatter
The Patentability of AI-Related Subject Matter According to the EPC as Implemented by the EPO
Abstract
This chapter comments upon the necessity and the feasibility of patenting AI-related subject matter, considering AI’s distinguishing features, i.e., its technical complexity, autonomy, and self-evolving capacities. These issues are examined in light of the European legal order. The investigation endeavours to identify grounds advocating for a change of tack on the handling of AI patent applications within the European Patent System.
Eleni Tzoulia
International Perspectives on Regulatory Frameworks: AI Through the Lens of Patent Law
Abstract
This chapter considers the approach to regulating artificial intelligence (AI) from a patent law perspective, exploring how existing forms of regulation can help inform our stance vis-à-vis AI. The focus is on contextualising the developments in software and pharmaceutical regulation through patent law, and then applying these lessons to AI. The EU, US and Japan all represent different approaches to the regulation of AI, and this diversity already impacts the existing relationship between AI and the patent system. The chapter concludes by recommending that self-regulation for AI inventions (but supported by robust systems of data protection) will be important in encouraging the commercial growth of AI. The patent system represents an important normative filter for AI, but the experience of software and pharmaceuticals in patent law highlights how it is more successful at excluding the most extreme iterations of a technology.
David Tilt

AI and Human Rights

Frontmatter
What Role for Social Rights During the Leap to Post or “Enhanced” Humanism?
Abstract
The chapter examines the relationship between posthumanism and social rights. It identifies the potential impact of the path towards posthumanism on social cohesion. On such grounds, the role of social rights, as guarantors of maximum social equality is addressed. More specifically, the rights to science and health are analyzed in relation to posthumanism. Social rights are approached as binding norms, within the framework of the human rights legal regime. The case for a cooperative and centralized model of governance is made and, in this context, the significance of international law is presented.
Themistoklis Tzimas
Artificial Intelligence vs Data Protection: How the GDPR Can Help to Develop a Precautionary Regulatory Approach to AI?
Abstract
In this chapter I analyse how experience drawn from the implementation of the data protection impact assessment (DPIA) under the General Data Protection Regulation (GDPR) can be helpful in developing a precautionary regulatory approach to artificial intelligence (AI) in European Union (EU) law. I begin by examining the shortcomings of the regulatory framework concerning the DPIA. Next, I move on to an analysis of the regulatory proposal concerning AI presented by the Commission in April 2021. The proposal strengthens the precautionary approach to the regulation of new technologies in EU law, e.g., by introducing the category of unacceptable risk. However, to guarantee an effective precautionary approach to the regulation of AI in the AI Act, it is necessary to learn from the shortcomings that became evident in the case of the GDPR’s provisions on the DPIA.
Joanna Mazur

AI and Jus ad Bellum-Jus in Bello Questions

Frontmatter
The Use of AI Weapons in Outer Space: Regulatory Challenges
Abstract
The new era in space exploitation is characterized by the rise of artificial intelligence (AI), which multiplies the capabilities of space systems, and by growing awareness of (space) environmental constraints. It is already apparent that the inevitable consequence of these developments will be growing economic and political tensions. The need to ensure the continuity of the benefits derived on Earth from space systems—and to protect space assets—seems more or less related to the growing debate on the militarisation and weaponization of space. In this context, AI and Autonomous Weapon Systems (AWS) will most probably be harnessed, and the laws possibly applying to potential conflicts in space will have to be discussed, taking into account the characteristics of these new technologies. As the UN Charter rules on the use of force apply expressis verbis in space, pursuant to Art. III of the Outer Space Treaty (OST), this chapter will primarily examine under what conditions States could use AWS in space, i.e., where it is considered that a threat to the peace (Art. 39 of the UN Charter) and/or an armed attack on a UN member State (Art. 51 of the UN Charter) is effectively occurring in this particular environment (jus ad bellum). Following this analysis, the specific operating mode of AWS potentially used in the context of a space war will be addressed, to clarify whether such activity could be in line with the rule established in Art. IV of the OST, as well as with international humanitarian law (jus in bello).
Anthi Koskina
Performance or Explainability? A Law of Armed Conflict Perspective
Abstract
Machine learning techniques lie at the centre of many recent advancements in artificial intelligence (AI), including in weapon systems. While powerful, these techniques utilise opaque models whose internal workings are generally quite difficult to explain, which necessitated the development of explainable AI (XAI). In the military domain, both performance and explainability are important and legally required by international humanitarian law (IHL). In practice, however, these two desiderata are in conflict, as improving explainability may involve paying an opportunity cost in performance and vice versa. It is unclear how IHL requires States to address this dilemma. In this article, we attempt to operationalise normative IHL requirements in terms of P (performance) and X (explainability) to derive qualitative guidelines for decision-makers on this issue. We first explain the explainability-performance trade-off, what causes it, and what its consequences are. Then, we explore relevant IHL principles that include P and X as requirements, and develop four tenets derived from these principles. We demonstrate how IHL prescribes minimum values for both P and X, but that once these values are achieved, P should be prioritised over X. We conclude by formulating a general guideline and provide an example of how this would impact model choice.
Jonathan Kwik, Tom van Engers
Metadata
Title
Artificial Intelligence and Normative Challenges
Edited by
Angelos Kornilakis
Georgios Nouskalis
Vassilis Pergantis
Themistoklis Tzimas
Copyright year
2023
Electronic ISBN
978-3-031-41081-9
Print ISBN
978-3-031-41080-2
DOI
https://doi.org/10.1007/978-3-031-41081-9