
Open Access Book | 2024


Multidisciplinary Perspectives on Artificial Intelligence and the Law

Editors: Henrique Sousa Antunes, Pedro Miguel Freitas, Arlindo L. Oliveira, Clara Martins Pereira, Elsa Vaz de Sequeira, Luís Barreto Xavier

Publisher: Springer International Publishing

Book Series: Law, Governance and Technology Series


About this book

This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI.

As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

Table of Contents

Scientific, Technological and Societal Achievements in Artificial Intelligence

Artificial Intelligence: Historical Context and State of the Art
Abstract
The idea that intelligence is the result of a computational process and can, therefore, be automated is centuries old. We review the historical origins of the idea that machines can be intelligent, and the most significant contributions made by Thomas Hobbes, Charles Babbage, Ada Lovelace, Alan Turing, Norbert Wiener, and others. Objections to the idea that machines can become intelligent have been raised and addressed many times, and we provide a brief survey of the arguments and counter-arguments presented over time. Intelligence was first viewed as symbol manipulation, leading to approaches that had some successes in specific problems, but did not generalize well to real-world problems. To address the difficulties faced by the early systems, which were brittle and unable to handle unforeseen complexities, machine learning techniques were increasingly adopted. Recently, a sub-field of machine learning known as deep learning has led to the design of systems that can successfully learn to address difficult problems in natural language processing, vision, and (though to a lesser extent) interaction with the real world. These systems have found applications in countless domains and are one of the central technologies behind the fourth industrial revolution, also known as Industry 4.0. Applications in analytics enable artificial intelligence systems to exploit and extract economic value from data and are the main source of income for many of today’s largest companies. Artificial intelligence can also be used in automation, enabling robots and computers to replace humans in many tasks. We conclude by providing some pointers to possible future developments, including the possibility of the development of artificial general intelligence, and point to the potential implications of this technology for the future of humanity.
Arlindo L. Oliveira, Mário A. T. Figueiredo


The Impact of Language Technologies in the Legal Domain
Abstract
In the current digital era, language technologies are playing an increasingly vital role in the legal domain, assisting users, lawyers, judges, and legal professionals in solving many real-world problems. While open datasets and innovative deep learning methodologies have led to recent breakthroughs in the area, significant efforts are still being made to transfer the theoretical and algorithmic developments associated with general text and speech processing into real applications in the legal domain. This chapter presents a brief survey of language technologies for addressing legal tasks, covering studies and applications related to both text and speech processing (manuscript submitted in May 2022).
Isabel Trancoso, Nuno Mamede, Bruno Martins, H. Sofia Pinto, Ricardo Ribeiro


Societal Implications of Recommendation Systems: A Technical Perspective
Abstract
One of the most popular applications of artificial intelligence algorithms is in recommendation systems (RS). These take advantage of large amounts of user data, learning from the past to help us identify patterns, segment user profiles, and predict users’ behaviors and preferences. The algorithmic architecture of RS has been so successful that it has been co-opted in many contexts, from human resources teams trying to select top candidates, to medical researchers wanting to identify drug targets. Although the increasing use of AI can provide great benefits, it represents a shift in our interaction with data and machines that also entails fundamental social threats. These can derive from technological or implementation mistakes but also from profound changes in decision-making.
Here, we overview some of those risks, including ethical and privacy challenges, from a technical perspective. We discuss two particularly relevant cases: (1) RS that fail to work as intended, and the possible unwanted consequences; (2) RS that work, but at the possible expense of threats to individuals and even to democratic societies. Finally, we propose a way forward through a simple checklist that can be used to improve the transparency and accountability of AI algorithms.
Joana Gonçalves-Sá, Flávio Pinheiro


Data-Driven Approaches in Healthcare: Challenges and Emerging Trends
Abstract
Data are dominating and revolutionizing the healthcare industry in unprecedented ways. Combined with new artificial intelligence technologies, they promise to create the foundations for a new paradigm of medicine focused on the individuality of each person. This chapter is divided into four sections that aim to introduce the reader to the topic of data-driven approaches in the health sector. Section one presents three ideologies that, despite some overlaps, take different views on how data should be used in order to guarantee a health service centered on each individual. Section two explores the data-driven concept. The emerging challenges of processing large volumes of data, and their impacts on individuals, institutions, and society, are associated with innovation in other disciplines such as artificial intelligence and personalized medicine. Since artificial intelligence is becoming a disruptive technology in the health sector, section three is dedicated to addressing the ethical and legal challenges posed by this new technological advance. To conclude, section four describes how the healthcare industry has become a major proving ground for artificial intelligence applications, with both startups and venture capital investors recognizing the enormous potential this technology can offer.
Ana Teresa Freitas


Security and Privacy
Abstract
Computer security or cybersecurity is concerned with the proper functioning of computer systems despite the actions of adversaries. Privacy is about a person’s or group’s ability to control how, when, and to what extent their personally identifiable information is shared. The chapter starts by defining security and privacy and explaining why they are problems. Then, it presents some of the scientific and technological achievements in the two areas, highlighting some research trends. Afterwards, the chapter relates security and privacy to the main topics of the book: machine learning (ML) as part of artificial intelligence. Finally, the chapter illustrates the relevance of ML in the area using censorship resistance as an example.
Miguel Correia, Luís Rodrigues

Ethical and Legal Challenges in Artificial Intelligence

Before and Beyond Artificial Intelligence: Opportunities and Challenges
Abstract
Artificial intelligence (AI) and digital systems currently occupy a fundamental place throughout society. They are devices that shape human life and induce significant civilizational changes. Given their huge power, namely in systems with autonomous decision-making capacity, it is natural that the potential social effects deserve critical reflection on the opportunities and challenges posed by AI. This is the main goal of this text. The authors begin by explaining the philosophical position from which they start, which contextualizes their reflection on technological innovation in general, then briefly consider the genealogy (“before”) of AI, in its main characteristics and direction of evolution (“Can machines imitate humans?”). It is in considering the path of AI’s development and its disruptive effects on human life (“beyond”) that its systematization into three categories—functional, structural, identity—is proposed (“Can humans imitate machines?”).
Regardless of optimistic or pessimistic expectations towards technological evolution, there is a need for a public debate about its current and future regulation. The text also identifies major ethical principles and legal requirements to regulate AI in order to protect fundamental human rights.
M. Patrão Neves, A. Betâmio de Almeida


Autonomous and Intelligent Robots: Social, Legal and Ethical Issues
Abstract
The word “robot” was used for the first time in 1921 by the Czech writer Karel Čapek, who wrote a play called R.U.R. (“Rosumovi Univerzální Roboti”), featuring a scientist who develops a synthetic organic matter to make “humanoid autonomous machines”, called “robots”. These so-called “robots” were supposed to act as slaves and obediently work for humans. Over the years, as real “robots” actually began to be built, their impact on our lives, our work and our society has brought many benefits, but also raised some concerns. This paper discusses some areas of robotics, their advances, challenges and current limitations. We then discuss not only how robots and automation can contribute to our society, but also raise some of the social, legal and ethical concerns that robotics and automation can bring.
Pedro U. Lima, Ana Paiva


The Ethical and Legal Challenges of Recommender Systems Driven by Artificial Intelligence
Abstract
In a hyperconnected world, recommendation systems (RS) are one of the most widespread commercial applications of artificial intelligence (AI), initially used mostly for e-commerce but now widely applied in different areas, for instance, content providers and social media platforms. Given the current information overload, these systems are designed mainly to help individuals deal with the infinity of options available, in addition to optimizing companies’ profits by offering products and services that directly meet the needs of their customers. However, despite their benefits, RS based on AI may also create detrimental effects—sometimes unforeseen—for users and society, especially for vulnerable groups. Constant tracking of users, automated analysis of personal data to predict and infer behaviours, preferences, future actions and characteristics, the creation of behavioural profiles and microtargeting for personalized recommendations may raise relevant ethical and legal issues, such as discriminatory outcomes, lack of transparency and explanation of algorithmic decisions that impact people’s lives, and unfair violations of privacy and data protection. This article aims to address these issues through a multisectoral, multidisciplinary and human rights-based approach, including contributions from law, ethics, technology, the market, and society.
Eduardo Magrani, Paula Guedes Fernandes da Silva


Metacognition, Accountability and Legal Personhood of AI
Abstract
One of the puzzles yet to be solved regarding Artificial Intelligence (AI) is whether or not robots can be considered accountable and, eventually, have legal personhood. With inputs from Philosophy, Psychology, Computation and Law, the paper proposes an interdisciplinary approach to the question of legal personhood in AI. In this paper, we examine, firstly, the concepts of Object (a mere tool) and Agent, in order to understand to which category AI may belong. Secondly, we analyze how Metacognition, broadly defined as cognition about cognition, which results in mental processes that control an entity’s thoughts and behavior, can be applied to law as a minimum requirement for accountability. For instance, we shall see that both children and people with mental diseases, besides being two categories of subjects with very restricted legal capacity, also show some limitations when it comes to Metacognition. In other words, we argue that the main difference between a non-responsible and a responsible Agent depends on the metacognitive processes that can be carried out by the entity. Ultimately, we discuss how to transpose this idea to AI, debating the possible terms of legal personhood of AI.
Beatriz A. Ribeiro, Helder Coelho, Ana Elisabete Ferreira, João Branquinho


Artificial Intelligence and Decision Making in Health: Risks and Opportunities
Abstract
The use of systems that include Artificial Intelligence (AI) imposes an assessment of the risks and opportunities associated with their incorporation in the health area. Different types of AI present multiple ethical, legal and social challenges. AI systems have been incorporated into new imaging and signal processing technologies. AI systems in the area of communication have made it possible to carry out previously non-existent interactions and facilitate access to data and information. The greatest concern involves the areas of planning, knowledge and reasoning, as AI systems are directly associated with the decision-making process. The central objective of this chapter is therefore to reflect and suggest recommendations, founded on the Complex Bioethics Model, about the decision-making process in health with AI support, considering risks and opportunities. The chapter is organized in two parts: (1) decision-making processes in health and AI, including (1.1) the opportunities and risks of using AI and decision-making processes to treat electronic health records (EHR); and (2) the Complex Bioethics Model (CBM) and AI.
Márcia Santana Fernandes, José Roberto Goldim


The Autonomous AI Physician: Medical Ethics and Legal Liability
Abstract
Artificial intelligence (AI) is currently capable of autonomously performing acts that constitute medical practice, including diagnosis, prognosis, therapeutic decision making, and image analysis, but should AI be considered a medical practitioner? Complicating this question is the fact that the ethical, regulatory, and legal regimes that govern medical practice and medical malpractice are not designed for nonhuman doctors. This chapter first suggests ethical parameters for the Autonomous AI Physician’s practice of medicine, focusing on the field of pathology. Second, we identify ethical and legal issues that arise from the Autonomous AI Physician’s practice of medicine, including safety, reliability, transparency, fairness, and accountability. Third, we discuss the potential application of various existing legal and regulatory regimes to govern the Autonomous AI Physician. Finally, we conclude that all stakeholders in the development and use of the Autonomous AI Physician have an obligation to ensure that AI is implemented in a safe and responsible way.
Mindy Nunez Duffourc, Dominick S. Giovanniello


Ethical Challenges of Artificial Intelligence in Medicine and the Triple Semantic Dimensions of Algorithmic Opacity with Its Repercussions to Patient Consent and Medical Liability
Abstract
Artificial intelligence algorithms have the potential to diagnose some types of skin cancer or to identify specific heart-rhythm abnormalities as well as (or even better than) board-certified dermatologists and cardiologists. However, one of the biggest fears in the healthcare sector in the era of AI in medicine is so-called black box medicine, given the obscurity in the way information is processed by algorithms. More broadly, there are three different semantic dimensions of algorithmic opacity relevant to medicine: (1) epistemic opacity, arising from physicians’ insufficient understanding of the rules an AI system is applying to make predictions and decisions; (2) opacity arising from the lack of medical disclosure about the AI systems used to support clinical decisions, and from patients’ unawareness that automated decision-making is being carried out with their personal data; (3) explanatory opacity, arising from the unsatisfactory explanation given to patients about the technology used to support professional decision-making. The aim of this study is therefore to analyze each type of opacity, considering hypothetical scenarios and their repercussions in terms of medical malpractice and patients’ informed consent. From this, the ethical challenges of using AI in the healthcare sector will be defined, along with the importance of medical education.
Rafaella Nogaroli, José Luiz de Moura Faleiros Júnior

The Law, Governance and Regulation of Artificial Intelligence

Dismantling Four Myths in AI & EU Law Through Legal Information ‘About’ Reality
Abstract
The European Commission has recently proposed several acts, directives and regulations that shall complement today’s legislation on the internet, data governance, and Artificial Intelligence, e.g., the AI Act from May 2021. Some have proposed to sum up current trends of EU law in catchy formulas, such as (i) digital sovereignty; (ii) digital constitutionalism; (iii) a new Brussels effect; and (iv) a human-centric approach to AI. Each of these narratives has its merits, but they can be highly misleading and must be taken with four pinches of salt. The aim of this paper is to dismantle these ‘myths’ through legal information ‘about’ reality, that is, knowledge and concepts that frame the representation and function of EU law. We should be attentive to what current myths overlook, such as the open issues on the balance of power between EU institutions and member states (MS), a new generation of digital rights at both EU and MS constitutional levels, down to the interplay between new models of legal governance and the potential fragmentation of the system, e.g., between technological regulations and environmental law.
Ugo Pagallo


AI Modelling of Counterfactual Thinking for Judicial Reasoning and Governance of Law
Abstract
When speaking of moral judgment, we refer to a function of recognizing appropriate or condemnable actions and the possibility of choice between them by agents. Their ability to construct possible causal sequences enables them to devise alternatives in which choosing one implies setting aside others. This internal deliberation requires a cognitive ability, namely that of constructing counterfactual arguments. These serve not just to analyse possible futures, being prospective, but also to analyse past situations, by imagining the gains or losses resulting from alternatives to the actions actually carried out, given evaluative information subsequently known.
Counterfactual thinking is thus a prerequisite for AI agents concerned with Law cases, in order to pass judgement and, additionally, for evaluation of the ongoing governance of such AI agents. Moreover, given the wide cognitive empowerment of counterfactual reasoning in the human individual, namely in making judgments, the question arises of how the presence of individuals with this ability can improve cooperation and consensus in populations of otherwise self-regarding individuals.
Our results, using Evolutionary Game Theory (EGT), suggest that counterfactual thinking fosters coordination in collective action problems occurring in large populations and has limited impact on cooperation dilemmas in which such coordination is not required.
Luís Moniz Pereira, Francisco C. Santos, António Barata Lopes


Judicial Decision-Making in the Age of Artificial Intelligence
Abstract
Artificial intelligence (AI) has become a pervasive presence in almost every aspect of society and business: from assigning credit scores to people, to identifying the best candidates for an employment position, to ranking applicants for admission to university. One of the most striking innovations in the United States criminal justice system in the last three decades has been the introduction of risk-assessment software, powered by sophisticated algorithms, to predict whether individual offenders are likely to re-offend. The focus of this contribution is on the use of these risk-assessment tools in criminal sentencing. Apart from the broader social, ethical and legal considerations, to date, not much is known about how perceptions of technology influence cognition in decision-making, particularly in the legal context. What research does demonstrate is that humans are inclined to trust algorithms as objective, and, as such, as unobjectionable. This contribution examines two phenomena in this regard: (i) the “technology effect”—the human tendency towards excessive optimism when making decisions involving technology; and (ii) “automation bias”—the phenomenon whereby judges accept the recommendations of an automated decision-making system, and cease searching for confirmatory evidence, perhaps even transferring responsibility for decision-making onto the machine.
Willem H. Gravett


Liability for AI Driven Systems
Abstract
This article tries to assess whether the current civil liability regimes provide a sound framework to tackle damages when AI systems—especially those based on machine learning—are involved. We try to find answers to three questions: is there a place for fault-based liability when it is impossible to ascertain, among multiple actors, whose action caused the damage? Are current strict liability regimes appropriate to address no-fault damages caused by the functioning of AI systems, or is a new system needed? When should an agent be exempted from liability? This analysis takes into consideration the important work produced within the European Union, especially the 2019 Report on “Liability for AI and Other Emerging Digital Technologies” (by the Expert Group set up by the European Commission), the European Parliament’s 2020 Resolution on Civil Liability for AI, the 2021 Draft AI Act, the 2022 Draft AI Liability Directive and the 2022 Draft Product Liability Directive.
Ana Taveira da Fonseca, Elsa Vaz de Sequeira, Luís Barreto Xavier


Risks Associated with the Use of Natural Language Generation: Swiss Civil Liability Law Perspective
Abstract
The use and improvement of Natural Language Generation (NLG) is a recent development that is progressing at a rapid pace. Its benefits range from the easy deployment of auxiliary automation tools for simple repetitive tasks to fully functional advisory bots that can offer help with complex problems and meaningful solutions in various areas. With fully integrated autonomous systems, the question of errors and liability becomes a critical area of concern. While various ways to mitigate and minimize errors are in place and are being improved upon by utilizing different error-testing datasets, this does not preclude significant flaws in the generated outputs.
From a legal perspective, it must be determined who is responsible for undesired outcomes from NLG algorithms: does the manufacturer of the code bear the ultimate responsibility, or is it the operator that did not take reasonable measures to minimize the risk of inaccurate or unwanted output? The answer to this question becomes even more complex when third parties interact with an NLG algorithm, which may alter the outcomes. While traditional tort theory links liability to the possibility of control, NLG may be an application that ignores this notion, since NLG algorithms are not designed to be controlled by a human operator.
Marcel Lanz, Stefan Mijic


AI Instruments for Risk of Recidivism Prediction and the Possibility of Criminal Adjudication Deprived of Personal Moral Recognition Standards: Sparse Notes from a Layman
Abstract
What follows is the recount of a concerned criminal lawyer, a layman, as he observes the change foreshadowed by AI in the field of individual recidivism risk assessment for the purposes of criminal penalty imposition on convicted felons. The text reflects upon the nature of that assessment when promoted by new AI programs based on actuarial (that is, statistically derived) information. It then compares that recidivism risk assessment with the one undertaken within the current traditional human paradigm, identifying the ensuing challenges that the technological alternatives set for the very survival of criminal law’s principiological mainstays. A final note is drawn on what is lacking in the technological proposal, for all its technical upsides and perceived advantages. Here the approach changes: from literature, one brings to the fore the very human account that lies at the center of anything resembling judgment, both the judgment of the individual being assessed and that of the court doing the assessment. Human as they both are, one heeds the kind of humanity that an entire science—that of law—and its specific approach must acknowledge: exactly the humanity that seems to be lacking in the technological AI proposals.
Pedro Garcia Marques


The Relevance of Deepfakes in the Administration of Criminal Justice
Abstract
Nowadays, it is challenging to distinguish between genuine content created by humans and synthetic content created by deepfake algorithms. It is therefore in the interest of society and nations to have systems that can detect and evaluate such content without human intervention. This paper presents the challenges of artificial intelligence, specifically machine learning and deep learning, in the fight against deepfakes. In addition, it presents the relevance that deepfakes may have in the administration of criminal justice.
Dalila Durães, Pedro Miguel Freitas, Paulo Novais


Antitrust Law and Coordination Through AI-Based Pricing Technologies
Abstract
Price is the core element of commercial transactions and an important parameter of competition. One of antitrust law’s aims is to ensure that market prices form under the laws of supply and demand, and not after the whims of monopolists or cartelists. Innovations in computer and data science have brought about pricing technologies that rely on advanced analytics or machine learning (ML) techniques, which could strengthen existing bargaining power disparities in part by supporting price coordination among competitors.
Existing research establishes a theoretical framework for competitive harm through coordination, showing that pricing technologies can lead to near-cartel price levels while avoiding anti-cartel prohibitions. This contribution builds on that framework, taking into account up-to-date empirical, game-theoretic, and computer science literature on pricing technologies to produce a taxonomy of those technologies. We then employ a comparative approach to identify the legal effects of various pricing technologies at a more granular level under EU and US antitrust law. The contribution supports greater understanding between economists and policy-makers regarding the analysis and treatment of AI-based pricing technologies.
Maria José Schmidt-Kessen, Max Huffman


The “Artificial Intelligence Act” Proposal on European e-Justice Domains Through the Lens of User-Focused, User-Friendly and Effective Judicial Protection Principles
Abstract
European e-Justice aims at developing electronic tools that allow national jurisdictions and the ECJ to communicate through reliable and secure digital channels. The 2019–2023 e-Justice Strategy underlined some new EU general principles developed directly under the e-Justice paradigm, with the ones concerning user-focused and user-friendly dimensions deserving particular attention. As 2021 is the year when justice digitalization will be under discussion, there is a need to understand how AI will impact justice fields, not only in MS judicial systems (EU functional jurisdictions, when applying EU law), but also in the ECJ, as this disruptive technology is being discussed. The Proposal for an AI Act stresses that AI systems intended for the administration of justice should be classified as high-risk, considering their potentially significant impact on effective judicial protection. Therefore, this paper intends to understand the need to fully stress the AI human-centric approach in justice fields, so that effective judicial protection can be deepened through user-focused and user-friendly principles; and to scrutinize, from the e-Justice standpoint, how the Proposal for an AI Act must further address the judicial instrumental usage of AI systems, so that judicial independence, procedural rights and access to justice are observed in the EU jurisdictional setting.
Joana Covelo de Abreu


The European Union’s Approach to Artificial Intelligence and the Challenge of Financial Systemic Risk
Abstract
This piece examines the EU’s ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence’ (‘AI Act’) with a view to determining the extent to which it addresses the systemic risk created by AI FinTech. Ultimately, it is argued that the notion of ‘high risk’ at the centre of the AI Act leaves out financial systemic risk. This exclusion can neither be justified by reasons of technology neutrality, nor by reasons of proportionality: neither is AI-driven financial systemic risk already covered by existing (or proposed) macroprudential frameworks and tools, nor can its omission from the AI Act be justified by the prioritisation of other types of risk. Moving forward, it is suggested that the EU’s AI Act would have benefited from a broader definition of ‘high risk’. It is also hoped that EU policy makers will soon begin to strengthen existing macroprudential toolkits to address the financial systemic risk created by AI.
Anat Keller, Clara Martins Pereira, Martinho Lucas Pires


Regulating AI: Challenges and the Way Forward Through Regulatory Sandboxes
Abstract
The financial industry was the first field where it became clear that we needed a new type of regulation: an evolutionary and anticipatory approach that can at least stand a chance of mitigating the new risks posed by disruptive technologies such as artificial intelligence (AI). This approach took the shape of various tools, none of which has shown more prominence than regulatory sandboxes. This rather young approach to regulation spread across various sectors and jurisdictions, from FinTech to privacy and healthcare.
The European Commission recognised the potential of regulatory sandboxes as a mechanism for increasing compliance but also as a way to facilitate innovation, and thus included them as part of the draft regulation on artificial intelligence (the AI Act). In this article we analyse the potential of regulatory sandboxes for regulating AI in the format envisioned in Articles 53 and 54 of the draft AI Act, and the challenges this approach could face based on the experience of earlier regulatory sandboxes involving AI products or services. We also aim to suggest some tailor-made solutions that would mitigate potential disadvantages of regulatory sandboxes for AI, including how to balance the emerging ‘Innovation Principle’ and the protection of human rights.
Katerina Yordanova, Natalie Bertels
Metadata
Title
Multidisciplinary Perspectives on Artificial Intelligence and the Law
Editors
Henrique Sousa Antunes
Pedro Miguel Freitas
Arlindo L. Oliveira
Clara Martins Pereira
Elsa Vaz de Sequeira
Luís Barreto Xavier
Copyright Year
2024
Electronic ISBN
978-3-031-41264-6
Print ISBN
978-3-031-41263-9
DOI
https://doi.org/10.1007/978-3-031-41264-6