
2020 | Book

Regulating Artificial Intelligence

edited by: Thomas Wischmeyer, Timo Rademacher

Publisher: Springer International Publishing


About this Book

This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers.

Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like.

The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality.

Table of Contents

Frontmatter
Artificial Intelligence as a Challenge for Law and Regulation
Abstract
The Introduction begins by providing examples of the fields in which AI is used, along with the varying impact this has on society. It focuses on the challenges that AI poses for setting and applying law, particularly in relation to legal rules that seek to preserve the opportunities associated with AI while avoiding or at least minimising the potential risks. The law must aim to ensure good digital governance, both with respect to the development of algorithmic systems generally and with respect to the use of AI specifically. Particularly formidable are the challenges associated with regulating learning algorithms, as in the case of machine learning. A great difficulty in this regard is ensuring transparency, accountability, responsibility, and the ability to make revisions, as well as preventing hidden discrimination. The Chapter explores the types of rules and regulations that are available. At the same time, it emphasises that it is not enough to trust that companies using AI will adhere to ethical principles. Rather, supplementary legal rules are indispensable, including in the areas examined in the Chapter, which are mainly characterised by company self-regulation. The Chapter concludes by stressing the need for transnational agreements and institutions.
Wolfgang Hoffmann-Riem

Foundations of Artificial Intelligence Regulation

Frontmatter
Artificial Intelligence and the Fundamental Right to Data Protection: Opening the Door for Technological Innovation and Innovative Protection
Abstract
The use of AI—insofar as personal data are processed—poses a challenge to current data protection law, as the underlying concept of data protection law conflicts with AI in many ways. It is debatable whether this has to be the case from the perspective of fundamental rights. If the fundamental data protection right in Article 8 of the EU Charter of Fundamental Rights (CFR) recognised a right to informational self-determination, i.e. a right to decide personally on the use of one's personal data, then the constraints on the legislator, at least with regard to the use of AI by public bodies, would be strict—the use of AI would thus be largely prohibited in this regard. However, it seems more convincing to interpret Article 8 CFR as a duty of the legislator to regulate the handling of data by the state—and thus also the use of AI—in such a way that fundamental rights are protected as far as possible. A fundamental right to data protection interpreted in this way would be open to technical innovation, because it would enable the legislature to deviate in part from the traditional basic concept of data protection law and instead to test innovative protective instruments that could even prove more effective. At the same time, it would not leave the individual unprotected, since it obliges the legislator, among other things, to base its regulations on a comprehensive concept for the protection of fundamental rights, which must also take account of data processing by private individuals.
Nikolaus Marsch
Artificial Intelligence and Autonomy: Self-Determination in the Age of Automated Systems
Abstract
The use of automated (decision-making) systems is becoming increasingly widespread in everyday life. By producing, for example, tailor-made decisions or individual suggestions, these systems increasingly penetrate—intentionally or unintentionally, openly or covertly—a sphere that has long been reserved for individual self-determination. With the advancing digitalisation of everyday life and the growing proliferation of such systems, it becomes ever more difficult for those affected to recognise the impact of these systems or to avoid their influence. This Chapter illustrates the risks that such systems may pose for individual self-determination and outlines possible ways out.
Christian Ernst
Artificial Intelligence and Transparency: Opening the Black Box
Abstract
The alleged opacity of AI has become a major political issue over the past few years. Opening the black box, so the argument goes, is indispensable to identify encroachments on user privacy, to detect biases and to prevent other potential harms. What is less clear, however, is how the call for AI transparency can be translated into reasonable regulation. This Chapter argues that designing AI transparency regulation is less difficult than often assumed. Regulators benefit from the fact that the legal system has already gained considerable experience with shedding light on partially opaque decision-making systems—human decisions. This experience provides lawyers with a realistic perspective on the functions of potential AI transparency legislation as well as a set of legal instruments that can be employed to this end.
Thomas Wischmeyer
Artificial Intelligence and Discrimination: Discriminating Against Discriminatory Systems
Abstract
AI promises fast, consistent, and rational assessments. Nevertheless, algorithmic decision-making, too, has proven to be potentially discriminatory. EU antidiscrimination law is equipped with an appropriate doctrinal tool kit to face this new phenomenon. This is particularly true in view of the legal recognition of indirect discrimination, which no longer requires strict proof of causality but instead focuses on conspicuous correlations. As a result, antidiscrimination law depends heavily on knowledge about vulnerable groups, on a conceptual as well as a factual level. This Chapter therefore recommends a partial realignment of the law towards a paradigm of knowledge creation when faced with potentially discriminatory AI.
Alexander Tischbirek
Artificial Intelligence and Legal Personality: Introducing “Teilrechtsfähigkeit”: A Partial Legal Status Made in Germany
Abstract
What exactly are intelligent agents in legal terms? Are we just looking at sophisticated objects? Or should such systems be treated as legal persons, somewhat similar to humans? In this article I argue in favor of a 'halfway' or 'in-between' status that German civil law has to offer: Teilrechtsfähigkeit, a status of partial legal subjectivity based on certain legal capabilities. If applied, intelligent agents would be treated as legal subjects insofar as this status followed their function as sophisticated servants. This would both deflect the 'autonomy risk' and fill most of the 'responsibility gaps' without the negative side effects of full legal personhood. However, the example of animals suggests that courts are unlikely to recognize Teilrechtsfähigkeit for intelligent agents on their own. This calls for a slight push from the lawmaker, which I call the 'reversed animal rule': it should be made clear by statute that intelligent agents are not persons, yet that they can still bear certain legal capabilities consistent with their serving function.
Jan-Erik Schirmer

Governance of and Through Artificial Intelligence

Frontmatter
Artificial Intelligence and Social Media
Abstract
This article examines the legal questions and problems raised by the increasing use of artificial intelligence tools on social media services, in particular from the perspective of the regulations specifically governing (electronic) media. For this purpose, the main characteristics of social media services are described, and the typical forms of AI applications on social media services are briefly categorized. The analysis of the legal framework starts with the introduction of ‘protective’ and ‘facilitative’ media regulation as the two basic concepts and functions of media law in general and of the law governing information society services in particular. Against this background, the major legal challenges associated with the use of AI on social media services for both protective and facilitative media regulation are presented. With respect to protective media regulation, these challenges include the fundamental rights protection of AI-based communication on social media services, legal options to restrict such forms of communication and the responsibilities of social media providers in view of unwanted content and unwanted blocking of content. As a major objective of facilitative regulation of social media AI, the regulatory handling of potential bias effects of AI-based content filtering on social media users is discussed, including phenomena commonly referred to as ‘filter bubble’ and ‘echo chamber’ effects.
Christoph Krönke
Artificial Intelligence and Legal Tech: Challenges to the Rule of Law
Abstract
Artificial intelligence is shaping our social lives. It is also affecting the process of law-making and the application of law—a development captured by the term 'legal tech'. Accordingly, law-as-we-know-it is about to change beyond recognition. Basic tenets of the law, such as accountability, fairness, non-discrimination, autonomy, due process and—above all—the rule of law are at risk. So far, however, little has been said about regulating legal tech, for which there is obviously considerable demand. This article suggests that we reinvent the rule of law and graft it onto technology by developing the right standards, setting the right defaults and translating fundamental legal principles into hardware and software. In short, 'legal protection by design' is needed and its implementation must be required by law—attributing liability where necessary. This would reconcile legal tech with the rule of law.
Gabriele Buchholtz
Artificial Intelligence and Administrative Decisions Under Uncertainty
Abstract
How should artificial intelligence guide administrative decisions under risk and uncertainty? I argue that artificial intelligence, specifically machine learning, lifts the veil covering many of the biases and cognitive errors engrained in administrative decisions. Machine learning has the potential to make administrative agencies smarter, fairer and more effective. However, this potential can only be exploited if administrative law addresses the implicit normative choices made in the design of machine learning algorithms. These choices pertain to the generalizability of machine-based outcomes, counterfactual reasoning, error weighting, the proportionality principle, the risk of gaming and decisions under complex constraints.
Yoan Hermstrüwer
Artificial Intelligence and Law Enforcement
Abstract
Artificial intelligence is increasingly able to autonomously detect suspicious activities (‘smart’ law enforcement). In certain domains, technology already fulfills the task of detecting suspicious activities better than human police officers ever could. In such areas, i.e. if and where smart law enforcement technologies actually work well enough, legislators and law enforcement agencies should consider their use. Unfortunately, the German Constitutional Court, the European Court of Justice, and the US Supreme Court are all struggling to develop convincing and clear-cut guidelines to direct these legislative and administrative considerations. This article attempts to offer such guidance: First, lawmakers need to implement regulatory provisions in order to maintain human accountability if AI-based law enforcement technologies are to be used. Secondly, AI law enforcement should be used, if and where possible, to overcome discriminatory traits in human policing that have plagued some jurisdictions for decades. Finally, given that smart law enforcement promises an ever more effective and even ubiquitous enforcement of the law—a ‘perfect’ rule of law, in that sense—it invites us as democratic societies to decide if, where, and when we might wish to preserve the freedom to disobey the rule(s) of law.
Timo Rademacher
Artificial Intelligence and the Financial Markets: Business as Usual?
Abstract
AI and financial markets go well together. The promise of speedy calculations, massive data processing and accurate predictions is too tempting to pass up for an industry in which almost all actors act according to a strictly profit-maximising logic. Hence, the strongly mathematical nature of financial decision-making raises the question: why do financial markets require a human element at all? Given the limited complexity of most current AI tools, the question is largely rhetorical. Still, AI tools have been used in finance since the early 1990s, and the push to overcome faulty computing and other shortcomings has been palpable ever since. Digitalization has amplified both efforts and possibilities. Institutions with business models based on AI are entering the market by the hundreds; banks and insurers are either spinning off their AI expertise to foster its growth or paying billions to acquire it. There is no way around AI—at least in certain parts of the financial markets. This article outlines the developments concerning the application of AI in the financial markets and discusses the difficulties pertaining to its sudden rise. It illustrates the diverse fields of application (Sect. 1) and delineates the approaches major financial regulators are taking towards AI (Sect. 2). In a next step, governance through and of AI is discussed (Sect. 3). The article concludes with the main problems that result from a reluctant approach towards AI (Sect. 4).
Jakob Schemmel
Artificial Intelligence and Public Governance: Normative Guidelines for Artificial Intelligence in Government and Public Administration
Abstract
This chapter discusses normative guidelines for the use of artificial intelligence in Germany against the backdrop of international debates. Artificial intelligence (AI) is increasingly changing our lives and our social coexistence. AI is both a research question and a field of research producing an ever-increasing number of technologies. It is a set of technologies that are still evolving, driven and influenced by guidelines in the form of laws or strategies. This chapter examines AI systems in public administration and asks what guidelines already exist and what trends are emerging. After defining AI and providing some examples from government and administration, it identifies ethics and politics as possible points of reference for guidelines. The chapter then presents law, technology, organization, strategy and visions as possible ways to influence and govern AI, describing current developments along the way. It concludes with a call for interdisciplinary research and moderate regulation of technology in order to enhance its positive potential.
Christian Djeffal
Artificial Intelligence and Taxation: Risk Management in Fully Automated Taxation Procedures
Abstract
On January 1, 2017, the Taxation Modernization Act entered into force in Germany. It includes regulations on fully automated taxation procedures. In order to uphold the principle of investigation that characterizes German administrative law, a risk management system can be established by the tax authorities. The risk management system aims to detect risk-fraught cases in order to prevent tax evasion. Cases identified as risk-fraught by the system need to be checked manually by the responsible tax official. Although the technical details of risk management systems are kept secret, such systems are presumably based on artificial intelligence. If this is true, and especially if machine learning techniques are involved, this could lead to legally relevant problems. Examples from outside tax law show that fundamental errors may occur in AI-based risk assessments. Accordingly, the greatest challenge of using artificial intelligence in risk management systems is its control.
Nadja Braun Binder
Artificial Intelligence and Healthcare: Products and Procedures
Abstract
This paper focuses on the statutory regulation of learning machines that qualify as medical devices. After a brief case study, the article takes a procedural perspective and presents the main features of the European regulatory framework applicable to medical devices in order to identify the regulatory peculiarities of machine learning. In this context, the Chapter analyses the inherent risks of machine learning applications as medical devices as well as the role of machine learning in their regulation. The overall finding is that the state, lacking the necessary expertise and material resources, enlists private companies for market access control and commissions them with the preventive inspection of medical devices. As a result, security measures adopted by the authority are in principle limited to the period after market entry. This leads to a structural information deficit: the authority has no systematic information about the products on the market and is left with the challenging task of overall market observation. This raises the question addressed in the fifth part of the paper: does the law provide sufficient instruments for the systematic transfer of knowledge about the potential risks of medical devices from the risk actors to the authority, and does this in fact remedy the authority's information deficit and ensure effective post-market-entry control of learning machines as medical devices?
Sarah Jabri
Artificial Intelligence in Healthcare: Doctors, Patients and Liabilities
Abstract
AI is increasingly finding its way into medical research and everyday healthcare. However, the clear benefits offered to patients are accompanied not only by general limitations typical of the application of AI systems but also by challenges that specifically characterize the operationalization of the concepts of disease and health. Traditionally, these challenges have been dealt with in the physician-patient relationship in both medical ethics and civil law. The potential for incorrect decisions (and the question of who is responsible for such decisions) in cases where AI is used in a medical context calls for a differentiated implementation of medical ethical principles and a graduated model of liability law. Nevertheless, on closer examination of both fields covering relevant obligations towards patients and users against the backdrop of current medical use cases of AI, it seems that despite a certain level of differentiation in the assignment of responsibilities through rules on liability, those affected, in the end, are generally left to deal with any AI-specific risks and damages on their own. The role played by the physician in all this remains unclear. Taking into account the physician-patient relationship as a contractual obligation in a broad sense can assist in clarifying physicians’ roles and determining their duties in a sustainable and patient-friendly manner when applying AI-based medical systems. This can contribute to reinforcing their established ethical and legal status in the context of AI applications.
Fruzsina Molnár-Gábor
Artificial Intelligence and Competition Law
Abstract
Artificial Intelligence (AI) is 'in the air'. The disruptive technologies AI is based on (as well as the respective applications) are likely to influence competition on and for various markets in due course. How to handle the opportunities and threats of AI remains an open question, and research on the competitive effects of AI has only recently commenced. Statements about AI and its effects are therefore necessarily of a provisional nature. From a jurisprudential point of view, it is nevertheless important to underline the framework for AI provided (not only) by competition law. On the basis of the 9th amendment of the German Act Against Restraints of Competition (ARC) of 2017, German competition law seems to be—to a large extent—adequately prepared for the phenomenon of AI. Nevertheless, considering the characteristics of AI described in this paper, at least the interpretation of German (and European) competition law rules requires an 'update'. In particular, this paper analyses tacit collusion as well as systematic predispositions of AI applications towards market abuse and cartelization. Additionally, it stresses that further amendments to (European and German) competition law rules should be examined with respect to liability for AI and law enforcement, whereby the respective effects on innovation and the market itself will have to be considered carefully. Against this background, the paper argues that strict liability for AI might have negative effects on innovation and discusses a limited liability regarding public sanctions, in analogy to the intermediary liability concepts developed in tort law, unfair competition law and intellectual property law. Finally, addressing the topic of a 'legal personality' for AI-based autonomous systems, the paper engages with the consequences of such a status for competition law liability.
Moritz Hennemann
Metadata
Title
Regulating Artificial Intelligence
edited by
Thomas Wischmeyer
Timo Rademacher
Copyright Year
2020
Publisher
Springer International Publishing
Electronic ISBN
978-3-030-32361-5
Print ISBN
978-3-030-32360-8
DOI
https://doi.org/10.1007/978-3-030-32361-5