
2024 | Book

Legal Aspects of Autonomous Systems

A Comparative Approach


About this Book

As computational power, the volume of available data, the autonomy of IT systems, and the human-like capabilities of machines increase, robots and AI systems have substantial and growing implications for the law and raise a host of challenges to current legal doctrines. The main question to be answered is whether the foundations and general principles of private law and criminal law offer a functional and adaptive legal framework for the “autonomous systems” phenomenon.

The main purpose of this book is to identify and explore possible trajectories for the development of civil and criminal liability; for our understanding of the attribution link to autonomous systems; and, in particular, for the punishment of unlawful conduct in connection with their operation. AI decision-making processes – including judicial sentencing – also warrant close attention in this regard.

Since AI is moving faster than the process of regulatory recalibration, this book provides valuable insights into the redesign of the current regulatory frameworks and their harmonization at the European level, in order to keep pace with technological change.

This book provides a broad and comprehensive picture of the legal challenges posed by autonomous systems, covering a wide range of topics: the regulation of autonomous vehicles; data protection and governance; personality rights; intellectual property; corporate governance; and contract conclusion and termination issues arising from automated decisions, blockchain technology and AI applications, particularly in the banking and finance sectors.

The authors are legal experts from around the world with extensive academic and/or practical experience in these areas.

Table of Contents

Frontmatter

Autonomous Systems and Civil Liability

Frontmatter
Autonomous Systems and Tort Law
Abstract
At the present stage of their development, as the European Law Institute has pointed out, algorithms have five main characteristics: complexity, increasing autonomy, opacity, openness and vulnerability. Able to learn from cumulative experience and to take independent decisions, and standing as true ecosystems of connected elements, autonomous systems pose a challenge to the classic remedies of tort law. If damage or harm is caused by an autonomous system, who can be held liable? In this paper, after showing the difficulties faced by the classic remedies, we consider three possible solutions to this problem: the establishment of a fund for the compensation of AI harm; the direct liability of the autonomous systems themselves; and the establishment of a new hypothesis of strict liability.
Mafalda Miranda Barbosa
Violation of the Right to Be Forgotten on the Internet: Legal Overview of Tort Law Aspects
Abstract
The current digital revolution, and in particular the Big Data phenomenon, has raised new legal challenges, requiring the social and democratic rule of law to adapt its legal order to this new scenario. This scenario has led to the creation of the right to be forgotten in response to citizens' demands regarding the potential violation of the right to privacy entailed by the storage, processing and mass transfer of personal information. Against this background, the GDPR regulates several pathways through which a person can exercise their right to be forgotten, and whoever suffers harm as a consequence of an infringement of its provisions is entitled to compensation for both patrimonial and moral damage. However, the GDPR does not create a system of either strict or subjective liability for such damage. This raises a number of legal issues in terms of achieving effective compensation for the affected party, and it also raises the question of whether the right to be forgotten is an efficient mechanism to resolve these new legal issues.
Marina Sancho López
Suppliers’ Civil Liability for Damage Caused by Autonomous Vehicles: A Brazilian Perspective
Abstract
This essay aims to analyze how Brazilian civil law interprets the suppliers’ responsibility for damage caused by autonomous vehicles. The study starts with the description of two hypothetical accident scenarios involving autonomous vehicles, and then seeks to identify the grounds that most aptly establish supplier liability in each context. The four grounds for liability referred to herein are (i) Articles 12 and 14 of the Consumer Defense Code, (ii) the sole paragraph of Article 927 of the Civil Code, (iii) Article 931 of the Civil Code and (iv) the risk-development theory. Once the four paths are applied to the two hypothetical incidents, the conclusion reached is that although the provisions considered do offer an overview of the “current state of the art” in Brazilian law regarding liability, they fail to provide conclusive solutions for determining supplier responsibility for damage caused by autonomous vehicles.
Giovana Benetti
European AI Regulation Perspectives and Trends
Abstract
At a time when the AI Act Proposal is being discussed, it is important to revisit the meaning of AI regulation in light of the relationship between ethics and law. The text then analyses the options taken by the European Commission in the Proposal and, in particular, criticizes its significant omission of civil liability. It argues that compensation needs to converge with the characteristics that, after all, justify the regulation of high-risk AI systems.
Henrique Sousa Antunes

Autonomous Systems, Attribution and Punishment

Frontmatter
The Basic Models of Criminal Liability of AI Systems and Outer Circles
Abstract
The way humans cope with breaches of the legal order is through criminal law, operated by the criminal justice system. Accordingly, human societies define criminal offenses and operate social mechanisms to apply them. This is how criminal law works. Originally, this system was designed by humans and for humans. However, as technology has developed, criminal offenses are no longer committed only by humans. The major development in this respect occurred in the seventeenth century. In the twenty-first century, criminal law is required to supply adequate solutions for the commission of criminal offenses through artificial intelligence (AI) systems. There are three basic models for coping with this phenomenon within the current definitions of criminal law. These models are:
(1) The Perpetration-by-Another Liability Model;
(2) The Natural Probable Consequence Liability Model; and
(3) The Direct Liability Model.
Gabriel Hallevy
Punishing Artificial Intelligence: Legal Fiction or Science Fiction
Abstract
Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This paper explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
Ryan Abbott, Alexander Sarch
Robots and Liability: New Criteria and Attribution Methods
Abstract
Many robots nowadays are termed “intelligent”, equipped as they are with Artificial Intelligence (AI) algorithms and able to perform complex activities and even make autonomous decisions. This technological and social change must be accompanied by legal consequences for the use of robots and for the harm they may cause. The European Commission has urged governments to prepare European citizens for the transformation that AI will bring about in ethical and legal matters. The law must not only ensure that these products are safe but also provide guarantees for consumers. For this reason, appropriate rules must be established for the attribution of liability when harm is caused. The main goal is to prepare Europe for the transformation that AI entails, with new norms to ensure its reliability. This chapter aims to analyse the criteria and methods for attributing legal liability for the use of robots equipped with AI. Firstly, the current regulations on contractual, non-contractual and consumer civil liability are discussed, together with the ways in which they could be applied in such cases and their potential and drawbacks. EU legislative proposals in this area are then analyzed, and the risks and harm generated by AI systems are studied.
Esther Monterroso Casado
Self-Driving Cars and Criminal Law
Abstract
This paper addresses the thought-provoking topic of the interaction between self-driving cars and Criminal Law. In this regard, I consider the problem known as that of tragic choices: Is it permissible to program a vehicle, in the event of an imminent accident, to choose between lives? And if not, should it be? After an introduction to the tragic choices dilemma (1., 2.), I present some illustrative cases (3.). A description of the current state of the discussion, mostly in Germany, reveals the as yet unresolved problems, allowing an answer to be given as to whether these vehicles may, on the basis of certain criteria, choose between some lives at the expense of others (4.). The difficulties presented suggest that it may be appropriate to think in terms of a type of transitional rule.
Alaor Leite
Autonomous Systems and Wrongdoing: Revisiting the Meaning of Wrongdoing
Abstract
Doubts have been raised regarding the appropriate framework to address civil or criminal liability for damage or wrongful results caused by autonomous systems. These doubts present an opportunity to revisit several customary prerequisites for imposing liability, notably the significance of wrongdoing in general, which is a key element of both non-contractual liability and criminal liability. In dilemmatic situations (i.e. life versus life cases) arising from autonomous systems, many of the potential answers to ascertain potential criminal liability (of the producer, programmer, or operator) seem to depend on a clear distinction between unlawfulness and culpability. This paper reviews traditional and recent proposals to explain the concept of wrongdoing and discusses whether such proposals would, in general, help to resolve liability claims for damage or wrongful results caused by autonomous systems. Regarding criminal liability specifically, it is emphasized that Pawlick's proposal of a citizen's criminal law, despite its added value and the author's intention to simplify the general theory of crime, nevertheless raises difficulties identical to, or greater than, those raised by traditional conceptions of criminal wrongdoing in this connection. The paper therefore concludes that the answers to dilemmatic situations arising from autonomous systems must be found in other conceptions of criminal wrongdoing, as Pawlick's proposal would fare no better (in this respect) than the traditional ones.
Rui Soares Pereira
Algorithmic Protection of the Core Area of Private Life. On the Deployment of Artificial Intelligence in Computer and Network Surveillance as a Duty of the State
Abstract
This article considers solutions to problems associated with State surveillance measures and interference with the privacy of the targeted individuals. Given that in-depth investigative measures to obtain digital evidence regularly capture more data than is needed for the performance of State obligations, it argues for the deployment of artificial intelligence algorithms to protect fundamental rights during the performance of such measures. On the premise of the State's duty to restrict intrusion into the individual's privacy to the necessary minimum, which should take into account the current state of the art of the technology, it is proposed that the technology should already be used to pre-censor intimate or irrelevant data during the seizure of evidence and to reduce serendipity (chance discoveries) in criminal investigations.
Orlandino Gleizer
The Spread of Fake News by Social Bots: Perspectives on Social Bot Regulation
Abstract
Social bots are considered a key element in spreading fake news. However, there are no more than estimates regarding the actual extent of their impact on social media. This is due to their identification problem, i.e., the impossibility of detecting them. Since any regulation attempt must first overcome this lack of data, the present paper aims to analyze the different approaches to solving the social bots' identification problem. To do so, it first defines social bots and then discusses the threats they pose. It then critically describes the current solution proposals, dividing them into three groups: technical approaches, the mandatory-label policy model, and the real-name policy model. After recognizing that only the last group is capable of overcoming the identification problem, the paper shifts its focus to finding a milder and less authoritarian version of the real-name policy model, i.e., one that does not harshly affect anonymity. As a result, the paper arrives at what is here called a ‘dual-class’ solution, which proposes that the most prudent way for a real-name policy to avoid an authoritarian path is to create an environment of incentives for the voluntary disclosure of identity data on the internet.
Hugo Soares

Autonomous Systems and Decision-Making

Frontmatter
Judicial Power Without Judicial Responsibility: The Case Against Robot Judges
Abstract
Is it possible that in future we will have robot judges? And would this actually be permissible? The article answers these questions with a reluctant “yes” and a strict “no” respectively.
Luís Greco
Regulating Judge Artificial Intelligence (AI)
Abstract
There are many concerns that arise in the context of Judge AI, where judges are completely replaced, and, to a far lesser extent, in the development of supportive Judge AI. These concerns are linked to micro as well as macro issues. For example, on an individual level, Judge AI might reduce cost and time factors but could also foster inaccurate and biased decision making or disadvantage vulnerable members of the community. At a macro level, the issues are more complex. Indeed, some early thinkers in the AI area considered that, of all groups within society, judges should not be replaced by AI. This concern is partly linked to the notion that AI might eventually take over the world and that human beings might be superseded by forms of AI. While Judge AI is in its early developmental phases, however, what considerations need to be made in building expert Judge AI systems, and how can decision-making processes be algorithmically programmed for machine learning to accurately model areas of the law? The author grapples with these questions as well as with ethical concerns regarding the automation of judging, including the extent to which human judges should be retained and the legal areas suitable and unsuitable for Judge AI involvement. In addition, an ethical framework and accompanying principles for the use of Judge AI in the legal system are contemplated, in order to maintain the values and wellbeing of each jurisdiction that may incorporate Judge AI into its legal system.
Tania Sourdin
Artificial Intelligence, Probabilities and Evidence
Abstract
This paper holds that mathematical probabilities may be critical to applying artificial intelligence to the assessment of legal evidence. Indeed, by studying trial decisions, we become aware that judges often use the word “probability” to justify their conclusions about the facts. If we assume that this way of talking corresponds, to some degree, to the methods brains use to reason about evidence, then it follows that probability theorems may be a useful and efficient tool for representing the knowledge processed, as well as for dealing with uncertainty, in the trial context. For this purpose, a sketch is drawn of how the subjective interpretation of probabilities and Bayes' theorem could work in the legal context. Finally, these ideas are applied to a case where an explicit probabilistic assessment of evidence was put forward by the court.
João Marques Martins
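As a brief illustration of the Bayesian reasoning referred to in the abstract above (a minimal sketch in the odds form commonly used in evidence scholarship, not drawn from the chapter itself), a piece of evidence E updates the fact-finder's assessment of a hypothesis H as follows:

\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\times \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\]

In words, the court's odds on H after seeing the evidence equal its prior odds multiplied by the likelihood ratio; the more strongly the evidence discriminates between H and its negation, the more it shifts the assessment of the facts.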
Creative Machines—Machine Learning Models, Copyright, and Computational Creativity
Abstract
The use of machine learning models—also known as (a sub-area of) artificial intelligence—is not restricted to mere probability prediction and data analysis in statistical applications. Machine learning models are also used in generative—sometimes even creative—contexts: be it to produce short news articles, to translate text, to come up with music specifically tailored to a certain scene in a movie, or—as in the case of the infamous Edmond de Belamy or the Next Rembrandt project—to “create” “works of art”. As technology advances, questions arise as to whether the “art works” produced could be protected by copyright and, if so, who would be considered the author. The paper assumes that the creative process depends heavily upon (human) autonomous decision-making and that the degree to which decisions are “delegated” to a computer program heavily influences the copyright protection of the output. It is also considered that, while machine learning models seemingly automate artwork generation, possibly rendering attribution to a human author difficult or unnecessary, machine learning models are the result of intellectual efforts and should be considered potent tools (which might in fact not hinder copyright protection) rather than autonomous machine artists.
Lisa Käde

Autonomous Systems and Contracts

Frontmatter
Blockchain(s), Smart Contracts and Intellectual Property
Abstract
The purpose of this chapter is to analyse to what extent the much-discussed blockchain and smart contract technologies may contribute to a better Intellectual Property system. It starts with a brief explanation of the relationship between bitcoin, cryptography, blockchain and smart contracts. On this last point, as this is a legal study, special attention is paid to the terminological problems of the term “smart contracts”. The chapter then outlines the potential applications of these technologies in the context of intellectual property rights and attempts to convey information on their current practical application at each of these points, considering the state of the art. The work would not be complete without a critical analysis of the real potential of this technology. In this context, we examine the arguments that have been put forward concerning the obstacles to the introduction of this technology within the scope of intellectual property. In particular, apart from the general problems that the introduction of this technology presents, there are some specificities of intellectual property rights that could hinder its exploitation.
Vítor Palmela Fidalgo
Algorithms, Creditworthiness, and Lending Decisions
Abstract
From a contract law perspective, this chapter addresses the implications of algorithmic creditworthiness assessments in credit agreements. On the assumptions that (a) access to credit equals opportunity, (b) the current debate turns on the risks of algorithmic bias and discrimination in creditworthiness assessments, and (c) there is a much-defended “duty to explain” automated decisions under the GDPR, this chapter delves into the multifaceted legal implications of the use of algorithms in credit scoring and in lending decision-making processes. The chapter starts by challenging common perceptions about algorithmic decisions, focusing on the concept of «opacity» in decision-making processes. It outlines key aspects of opacity and its impact on creditworthiness assessment, highlighting the complexities of creditworthiness and the legal obligations of lenders, borrowers, and credit bureaus. Ultimately, the study concludes that the epistemic challenge posed by algorithm-based reasoning is less problematic than that of human decision-making. As a consequence, it can be argued that, under the purview of contract law, there is no valid justification for treating an algorithmic lending decision to deny or cut credit differently or more severely within the specific lender-borrower relationship, provided the same degree of opacity is present. Close attention is paid to algorithmic decisions considered discriminatory or unfair, as these decisions lie behind the current concerns about algorithmic accountability.
Ana Alves Leal
Blockchain, Currency and Systemic Issues
Abstract
Cryptocurrencies are on the rise, purporting to perform one or more monetary functions. In this article I seek to determine how far the legal framework that is assumed by consumers and companies when using conventional monetary objects is applicable to cryptocurrencies and to what extent the Regulation on Markets in Crypto-Assets is likely to address the specific monetary risks posed by these new phenomena.
Francisco Mendes Correia
CorpTech and Self-Driving Corporate Governance
Abstract
This paper studies the intersections between technology and corporate law, questioning whether corporate technologies (CorpTech) will fundamentally reshape corporate governance. In this regard, the paper notes that the use of Artificial Intelligence (AI) in corporations raises a twofold problem: the use of AI in a company's governance and the governance of AI.
Madalena Perestrelo de Oliveira
Metadata
Title
Legal Aspects of Autonomous Systems
Edited by
Dário Moura Vicente
Rui Soares Pereira
Ana Alves Leal
Copyright Year
2024
Electronic ISBN
978-3-031-47946-5
Print ISBN
978-3-031-47945-8
DOI
https://doi.org/10.1007/978-3-031-47946-5