
2021 | Book | 1st Edition

Algorithmic Governance and Governance of Algorithms

Legal and Ethical Challenges

Edited by: Martin Ebers, Marta Cantero Gamito

Publisher: Springer International Publishing

Book Series: Data Science, Machine Intelligence, and Law


About this Book

Algorithms are now widely employed to make decisions with increasingly far-reaching impacts on individuals and society as a whole (“algorithmic governance”), which can lead to manipulation, bias, censorship, social discrimination, violations of privacy and property rights, and more. This has sparked a global debate on how to regulate AI and robotics (“governance of algorithms”). This book discusses both of these key aspects: the impact of algorithms, and the possibilities for future regulation.

Table of Contents

Frontmatter
Algorithmic Governance and Governance of Algorithms: An Introduction
Abstract
The use of algorithms is increasingly replacing human decision-making. Naturally, this raises concerns about how to govern AI-powered technologies. This chapter introduces both the potential and the threats posed by decision-making by algorithms (algorithmic governance) and provides an up-to-date overview of the state of the art and the existing legislative initiatives in this field (governance of algorithms).
Marta Cantero Gamito, Martin Ebers
Privacy, Non-Discrimination and Equal Treatment: Developing a Fundamental Rights Response to Behavioural Profiling
Abstract
In the diverse attempts to identify the fundamental rights implications of behavioural profiling, the lines between the right to privacy, non-discrimination and equal treatment have become blurred. Scholars have struggled to develop coherent approaches to the widespread practice of evaluating and differentiating between individuals on the basis of correlative relations between random, causally unrelated categories in large data sets. This chapter suggests a response to these practices and establishes clear boundaries between the rights. It argues that the right to non-discrimination should be interpreted narrowly and thus not apply to large parts of behavioural profiling: extending its scope to random categories would jeopardise the right's distinctive capacity to effectively prohibit the most appalling and morally reprehensible differentiations. The scope of the right to privacy, conversely, has an open-ended structure and can evolve further in response to novel threats. It is, however, first and foremost the right to equal treatment that carries great, though little acknowledged, potential to provide a normative framework for behavioural profiling. Engagement with it can encourage, frame and respond to a much-needed societal debate on machine learning based data analysis. The difficulties that arise in applying the right to equal treatment are manifestations of a larger societal challenge. The shared question that both fundamental rights lawyers and society at large must answer is how to accommodate this new way of generating knowledge and differentiating between individuals within our conventional ways of understanding the world, reasoning and differentiating.
Niklas Eder
The Black Box on Trial: The Impact of Algorithmic Opacity on Fair Trial Rights in Criminal Proceedings
Abstract
Algorithms are increasingly used in criminal proceedings for evidentiary purposes (e.g. GPS positioning, DNA analysis or the obtaining of digital evidence) and to support decision-making (e.g. software for assessing an individual's risk of recidivism). In a worrying trend, these tools remain shrouded in secrecy and opacity, preventing any understanding of how their specific output has been generated. This chapter focuses on the legal challenges triggered by algorithmic opacity in criminal proceedings. It argues that algorithms may contain miscodes, that opacity keeps such miscodes hidden, and that, as a result, algorithmic opacity impacts fair trial rights.
Francesca Palmiotto
Microchipping Employees: Unlawful Monitoring Practice or a New Trend in the Workplace?
Abstract
A specific technology has been making its way into a growing number of workplaces. In addition to computer screen recording, video surveillance, keystroke monitoring, location tracking and social media monitoring, employers can now use microchips as an additional tool to monitor employee activity. These microchips give employees new abilities, replacing credit cards, keys, passwords and bracelets: they allow employees to automatically open doors, activate computers or printers and pay for purchases. Despite these useful qualities, microchips raise several possible legal concerns, such as the possibility of covert and constant surveillance, profiling and digital discrimination. As this innovative technology appears destined to further challenge regulatory frameworks, this chapter aims to answer the following research questions: is the GDPR applicable in the case of microchipping? If so, on what legal grounds can microchipping be lawfully accommodated? And, since each EU Member State may provide more specific rules under the GDPR to safeguard employees' right to the protection of personal data, the chapter analyses whether more specific national rules are needed to protect microchipped employees.
Seili Suder, Merle Erikson
Electronic Personhood: A Tertium Genus for Smart Autonomous Surgical Robots?
Abstract
Back in 2016, the Committee on Legal Affairs of the European Parliament published a pioneering initiative, the Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (the “Resolution”, or the “EP proposed rules”). These rules were intended to bring a common EU solution to the legal challenges posed by, amongst others, smart autonomous robots. This chapter scrutinizes the applicability of one of the solutions the EP proposed rules put forward, the granting of electronic personhood to such robots, by critically evaluating the existence of a legal basis for it.
Tomás Gabriel García-Micó
Online Behavioural Advertising and Unfair Manipulation Between the GDPR and the UCPD
Abstract
Online behavioural advertising, i.e. the practice of targeting digital advertising according to consumers' online behaviour, poses threats to consumer autonomy. This is not only because behavioural advertising intrinsically has the effect of restricting consumers' options, but also, and especially, because it increasingly enables the customisation of advertising messages according to consumers' psychological traits and vulnerabilities. Against this background, the present work evaluates if, how and to what extent the General Data Protection Regulation (EU) 2016/679 (GDPR) and Directive 2005/29/EC on unfair commercial practices (UCPD) contribute to safeguarding individuals' autonomous decision-making in the face of new, subtle, data- and technology-driven forms of manipulative advertising.
Federico Galli
Protecting Deep Learning: Could the New EU Trade Secrets Directive Be an Option for the Legal Protection of Artificial Neural Networks?
Abstract
Deep learning based on artificial neural networks is currently the most promising machine learning method in the field of AI. This paper distinguishes four objects of legal protection in artificial neural networks per se: the training data, the topology, the weights as an expression of the trained network, and the specific training method. Both archetypal intellectual property (IP) rights, copyright and patent law, fall short to some extent of protecting these objects. It examines whether and to what extent trade secret protection could be a suitable or supplementary legal protection tool. Trade secret protection is, among other advantages, flexible. Its greatest weakness, however, is that it allows for reverse engineering, which in turn limits its application as a legal protection tool. With appropriate adaptation, trade secret law could at least temporarily supplement patent law and partially replace classical anthropocentric copyright law in the field of deep learning.
Jasper Siems
Chinese Copyright Law and Computer-Generated Works in the Era of Artificial Intelligence
Abstract
Artificial intelligence (AI) technology has been widely applied by innovative industries, producing a large number of outputs indistinguishable from human works. These outputs raise debates on the legal status of computer-generated works (CGWs) under copyright law, questions directly related to key issues of copyright law, including copyrightability and authorship. Based on an analysis of China's copyright system, this article asks whether the criterion for judging the originality of computer-generated works should take into account the author's personality and creativity. It argues that certain existing rules of Chinese copyright law, such as “the work created in the course of employment” and “the work of a legal person or entity”, allow for the separation of authorship and ownership. Under the basic principle that “the author is the owner of copyright”, these rules value the importance of investors, which offers a possible approach to protecting CGWs under current copyright law in the future.
Ying Ye, Mike Adcock
Metadata
Title
Algorithmic Governance and Governance of Algorithms
Edited by
Martin Ebers
Marta Cantero Gamito
Copyright Year
2021
Publisher
Springer International Publishing
Electronic ISBN
978-3-030-50559-2
Print ISBN
978-3-030-50558-5
DOI
https://doi.org/10.1007/978-3-030-50559-2