
EU Digital Law in the AI Era

  • 2025
  • Book

About this book

This book engages with the evolving legal terrain of the European Union in light of recent advances in generative artificial intelligence. It reflects on how existing legal frameworks, developed over several decades, are being tested by technologies that operate with increasing autonomy, complexity, and opacity. Rather than treating AI as a singular phenomenon, the volume brings together diverse perspectives to examine how the law is responding across fields such as liability, data protection, intellectual property, and digital security. The contributions draw on a range of legal traditions and critical approaches. Some chapters focus on how the rules on defective products must evolve to account for autonomous behaviour. Others examine new forms of workplace management shaped by automated decision-making, as well as privacy concerns that call for novel safeguards tailored to the evolving capabilities of AI systems. The book also turns to the field of data protection, observing how judicial reasoning under the GDPR is adapting to the demands of machine learning. This is complemented by a reflection on the role of design in ensuring transparency and accountability within digital urban infrastructures. Copyright is addressed in a chapter that examines how the rise of artificially generated content unsettles long-standing concepts of authorship, originality, and ownership. It questions whether current intellectual property regimes remain adequate and considers the legal implications of using data as a form of remuneration in digital creative markets.
Consumer protection is viewed through a new lens, particularly where AI systems influence contracts, decisions, or vulnerabilities. The discussion includes early insights into proposals on liability for artificial intelligence and considers the risks users face in increasingly automated environments. Legal responses to cybercrime and digital surveillance are also discussed, including the place of lawful interception and the admissibility of electronic evidence. The broader implications of large language models for digital resilience and public safety are critically assessed. Rather than offering a fixed map, the volume encourages a more dynamic reading of European digital law in the age of AI. It speaks to lawyers, researchers, and institutional actors grappling with how legal orders can hold on to fundamental principles while adapting to the realities of new technologies. Each chapter invites the reader to consider not only where the law stands, but also where it can and should go next.

Table of Contents

  1. Frontmatter

  2. Introduction

    Philippe Jougleux, Eleni-Tatiani Synodinou, Christiana Markou, Thalia Prastitou Merdi
    Abstract
    The legal framework surrounding digital transformation has, in its relatively short history, already undergone profound shifts. Disruptive technologies have steadily emerged since the 1980s, starting with the rise of software as a foundational architecture of a rapidly evolving digital economy. This technological momentum gained further traction with the popularization of the internet in the 1990s, which drastically altered social, commercial, and legal landscapes worldwide. The turn of the century brought social media, which reshaped social dynamics and introduced a new level of complexity to issues of privacy, information dissemination, and freedom of expression.
  3. The AI Liability Puzzle: Rethinking Defective Product Liability for AI

    Teresa Rodríguez-de-las-Heras Ballell
    Abstract
    The impetus of AI is revealing numerous, incredibly diverse, and cross-sector applications. The benefits associated with the use of AI and expected from their systematic and extensive application are multiple, extremely promising, and to a certain extent overwhelmingly positive. Nevertheless, the expansive and growing use of AI in our society can also be a source of new risks, lead to undesired outcomes and unintended consequences, or raise legal concerns and social challenges of many different kinds. In the face of such potentially negative effects, the fundamental question is whether traditional legal regimes are equipped to manage the risks and effectively resolve the conflicts arising from these situations in complex technological environments. The adequacy and completeness of civil liability regimes in the face of technological challenges have an extraordinary societal relevance. Should the liability system reveal insufficiencies, flaws and gaps in dealing with damages caused by AI, in particular, victims can remain uncompensated or, at least, only partly compensated. The social impact of a potential inadequacy of existing legal regimes to address new risks created by AI might then compromise the expected benefits. This Chapter discusses first the inadequacies detected in the existing civil liability regimes in the face of AI to explore the different policy options to consider with the aim of accommodating the liability system to scenarios of damages caused by, or with the intervention of, AI systems. Subsequently, the Chapter analyses the European Union’s response to the AI liability challenges with the adoption of two legislative proposals to accommodate product liability rules as well as some civil liability ones to damages caused by, or with the intervention of, AI systems (the latter was finally withdrawn in 2025). The Chapter focuses on the finally adopted revision of the Product Liability Directive, identifying the main challenges, and exploring the primary solutions, and elaborates on the idea that the revised rules play a key role in the policy strategy to resolve the AI Liability puzzle.
  4. Regulating Algorithmic Management of Work at EU Level: The Adequacy of the Platform Work Directive and the AI Act

    Stamatina Yannakourou
    Abstract
    Algorithmic management is spreading rapidly in both the gig and regular economies and is becoming a central point of academic analysis and EU regulatory intervention. In 2024, the EU adopted two novel pieces of legislation which impact the algorithmic management of work: the Platform Work Directive (hereinafter ‘PWD’) (Directive (EU) 2024/2831 of the European Parliament and of the Council of 23 October 2024 on improving working conditions in platform work, [2024] OJ L, 11.11.2024, pp. 1–26) and the Artificial Intelligence Act (hereinafter ‘AI Act’) (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act), [2024] OJ L, 12.7.2024, pp. 1–144). In the context of these legal initiatives, a crucial question arises: are such regulatory tools adequate to tackle the harms that algorithmic management practices pose to workers’ fundamental rights (e.g. the right to dignity, the right to privacy, the right to equal treatment, freedom of association)? To answer this question, the chapter examines the content and limits of each of the two regulatory texts in the light of their different conceptual approaches, juxtaposing a traditional rights-based legal method with a less conventional risk-based method. It concludes that the risk-regulation rationale weakens the protection of workers’ rights and thus creates a need for complementary rights-based and civil-recourse legislation to tackle the use of AI in an employment context.
  5. EU Digital Law in the Artificial Intelligence (AI) Era: Towards a New Privacy Maturity Methodology

    Kosmas Pipyros
    Abstract
    Article 35 of the GDPR lays down the legal obligation of data controllers to execute a Data Protection Impact Assessment (DPIA) when processing activities are likely to pose a significant risk to individual rights and freedoms. The GDPR outlines only the fundamental requirements for performing a DPIA, without detailing the process for its execution. Moreover, while National Privacy Authorities globally (such as CNIL, ICO, etc.) have issued guidelines for (D)PIAs, these primarily take the form of checklists and lack a full-fledged framework for step-by-step implementation of a DPIA and for evaluating the overall maturity level of an organization.
    This chapter introduces a novel, multi-faceted approach to privacy maturity that serves as a mechanism for the assessment and management of data protection risks. The methodology is designed to unlock the full potential of DPIAs in safeguarding fundamental rights as envisaged by the GDPR and to act as an all-encompassing instrument for legal compliance. This is achieved by integrating the GDPR’s legal requirements and best practices with qualitative and quantitative methods of analysis drawn from the field of information security.
  6. The Impact of AI on Data Protection: Evolution of Court of Justice of the European Union Case Law Regarding the General Data Protection Regulation (GDPR) in the Artificial Intelligence Era

    Athina Moraiti, Charalampos Stamelos
    Abstract
    This paper explores the complex relationship between AI and data privacy, with a focus on the implications of GDPR principles and pertinent case law from the Court of Justice of the European Union (CJEU). Even if the GDPR does not specifically mention AI, its rules for fair and transparent data handling still apply to AI applications. CJEU rulings shed light on GDPR compliance in the context of AI technologies.
    To successfully navigate the intersection of AI and data protection, organizations must implement specialized techniques that address the unique problems presented by AI systems. Businesses can preserve people’s rights while utilizing AI by putting strong organizational and technical safeguards in place, such as encryption, access limits, and frequent audits, as specified by the GDPR.
    Finally, compliance with GDPR principles not only fosters trust and transparency but also ensures ethical AI deployment, ultimately contributing to sustainable innovation and societal well-being.
    Recent CJEU case law has focused on delineating legitimate bases for processing personal data in the context of AI, elucidating the requirements for international data transfers, and clarifying the roles and responsibilities of data controllers and processors.
  7. Building Transparent Smart Cities: Informational Self-Determination in Interconnected Ecosystems

    Apostolos Vorras, Vasileios Karkatzounis, Lilian Mitrou
    Abstract
    Smart city models rely significantly on crowd and people surveillance, namely on systems collecting information about citizens’ location in near-real time using signal-based sensors or other techniques and processing this data to generate insights about the movement and behavior of people within a specific area in order to facilitate data-driven decision-making. While these innovative data-gathering applications can offer valuable insights for business management, the fact that they capture and process data in a seamless and often invisible way undermines the ability of individuals to effectively identify intrusive practices and associated privacy risks. Continuous tracking and analysis of individuals’ movements may result in the creation of behavior-based profiles, raising severe concerns about potential misuse, from targeted advertising and privacy violations to discrimination and security risks. This paper discusses, in light of the European privacy regulatory framework, the application of the transparency principle in the interconnected data ecosystem of smart cities, where various technologies, sensors, devices, and databases are integrated in order to collect, analyze and share data for urban management and improvement.
  8. The EU Right of Communication to the Public Against Creativity in the Digital World: A Conflict at the Crossroads?

    Zoi Krokida
    Abstract
    This article discusses the EU right of communication to the public and its application in the digital world. More specifically, it critically examines the application of the right in the context of linking activities and online platforms’ regulatory framework and then addresses its implications for online creativity. Having discussed and identified potential discrepancies, the article argues that a balanced interpretation of the act of communication to the public is needed. This could be achieved through a narrow interpretation of the right of communication to the public. In that way, the fundamental rights of the parties involved would be taken into consideration.
  9. IP Protection for AI-Generated Content in the “Post-AI Act” Era

    Eleni Tzoulia
    Abstract
    This chapter examines the provisions of the AI Act pertaining to generative AI and identifies their implications for the legal treatment of AI-generated content. Arguably, the EU acknowledges the nature of such subject matter as a copyright-ineligible private asset and implements targeted measures to address the corresponding challenges. Starting from this premise, the chapter explores the proprietary status of AI-generated content “de lege lata” and discerns inconsistencies which require the intervention of IP law. In this respect, the study advocates for the institution of a sui generis IP right and endeavors to outline its particularities, e.g., the appropriate personal, material, and territorial scope of application, as well as the term of protection. The suggested approach aligns with the regulatory model introduced by article 15 Directive (EU) 2019/790.
  10. Data as Remuneration in Digital Copyright Licensing: Some Reflections on the Concept of ‘Appropriate and Proportionate Remuneration’ Under Art. 18 EU Directive 2019/790 in the Data Era

    Theodoros Chiou
    Abstract
    Data are the new “oil” of the twenty-first century, according to a widely used metaphor. Their undeniable economic value in a data-driven economy converts data into a tradable commodity. Given the economic dimension of data and their overall importance in the data era, this chapter discusses whether data may constitute a legally acceptable form of author’s remuneration in the context of licensing agreements under EU Copyright Law and, if so, under which requirements. The question is tackled in the light of Art. 18 EU Directive 2019/790. A general overview of the contours of the “principle of appropriate and proportionate remuneration” is offered, prior to examining whether the concept of “remuneration” contained therein accommodates non-monetary considerations, such as data, and under which circumstances data could possibly qualify as “appropriate and proportionate remuneration”.
  11. Vulnerable Consumers in the Digital Era: How Can the UCTD Evolve to Combat Tech Exploitation?

    Mateja Durovic, Matthew Dacoronias-Marina
    Abstract
    The chapter discusses the adequacy of the UCTD to adapt to technological change in light of the recent cases against tech giants such as TikTok, Meta, and X. It first showcases the differences between the UCTD and other EU instruments on consumer protection, such as the Digital Markets Act and the General Data Protection Regulation, and their respective advantages and disadvantages. The main focus is analysing the protection framework for vulnerable groups, i.e. minors, against unfair contract terms, exploitation of their vulnerability for advertising purposes, and data privacy risks. Concerns over TikTok’s privacy policies and terms are analysed with a focus on the legal battles fought in and out of the EU, as well as the company’s response to the regulators’ concerns. Finally, the chapter suggests how the UCTD could be reformed to address the challenges of this new digital era, mainly through three changes: the first is upgrading the indicative list of unfair terms into a blacklist and creating a grey list, the second is ensuring transparency through better means of presentation, and the third is implementing a personalised tiered disclosure system. These changes could better safeguard consumers in the digital marketplace, especially vulnerable groups of consumers.
  12. Consumer Protection in the AI Era: A First Reading of the Proposal for the AI Liability Directive

    Eleftheria (Ria) Papadimitriou
    Abstract
    In light of the revised Product Liability Directive (PLD), the European Commission has published the proposal for the Artificial Intelligence Liability Directive (hereinafter the AILD). The latter focuses on the adaptation of non-contractual civil liability rules to artificial intelligence (AI). The first legislative step towards the regulation of AI within the EU came in July 2024 with the Regulation on artificial intelligence (AI Act). Both legislative instruments set new rules in the field of law and AI, which will affect both businesses and consumers, at least within the EU. The AILD’s objective is to provide legal certainty and prevent fragmentation of non-contractual civil liability rules across the EU. In this light, the paper focuses on the analysis of the two new mechanisms introduced by the AILD, namely the disclosure of evidence and the presumption of non-compliance (art. 3 AILD) and the rebuttable presumption of a causal link in the case of fault (art. 4 AILD). The aim is to evaluate the AILD from a consumer law perspective and to examine the applicability of these two new mechanisms in practice, with the consumer being the main point of reference of this examination.
  13. Lawful Interception of Communications, Serious Crimes and the EU Law

    Philippe Jougleux
    Abstract
    This chapter analyzes the legal framework of lawful interceptions by Law Enforcement Agencies (LEAs) in the EU. Firstly, it describes it as an exception to the principle of confidentiality of electronic private communications and presents the democratic safeguards attached to its regime. Subsequently, it discusses the present crisis of lawful interception in the digital era, highlighting weaknesses in both the practical and theoretical frameworks. It demonstrates that the currently advanced tools used by the LEAs to circumvent cryptographic protection exist within a legal grey area. Lastly, it delves into the rise of the concept of a “serious offense” as an additional democratic safeguard for the regime. The chapter argues that this concept is inherently flawed and that other democratic safeguards, such as the generalization of information about the data subject at the conclusion of the interception, should be considered instead.
  14. e-Evidence Regulation: A Contemporary Trojan Horse in Criminal Proceedings?

    Vagia Polyzoidou
    Abstract
    The e-Evidence package (including Regulation (EU) 2023/1543 and Directive (EU) 2023/1544), adopted in July 2023 and set to enter into force in August 2026, marks a significant milestone in the field of cross-border access to electronic evidence in criminal proceedings. This Chapter presents the key elements of the new regulatory framework, outlining the reasons that necessitated the development of these new mechanisms. It briefly discusses the lengthy process that led to the completion of the existing legal framework based on International Judicial Cooperation and Mutual Legal Assistance, as well as the rethinking of the concept of Mutual Trust through the model of direct cooperation between private entities (ISPs) and third-party states. Additionally, it raises critical questions regarding the compatibility and consistency of certain aspects of the new “Absolute Mutual Trust” system with EU fundamental principles and human rights, which emerge both from the doctrinal (European) Criminal Law (e.g., legal interests at stake) and Procedural Criminal Law (e.g., the right to a fair trial and effective remedies). These concerns arise from the legal efforts to address issues such as permanent data storage and the “absence” of non-EU service providers, ultimately aiming to facilitate investigations and prosecutions by any means.
  15. ChatGPT, a Life-Changing Phenomenon with Cyber-Security Implications

    Yianna Danidou
    Abstract
    Large Language Models (LLMs) are extremely popular nowadays due to their power to recognize, summarize, translate, predict and generate text and other content. These capabilities have paved the way for the development of exceptional real-life applications. This paper attempts to present, using ChatGPT only as one example out of the many LLMs that currently exist or are under development, the several advantages that these applications can provide, but at the same time the concerns that their use or their development might raise in terms of cybercriminality. The central argument of this paper is that generative AI is not a magical solution. If deployed maliciously or without proper diligence, generative AI applications could cause unfathomable damage. Our analysis categorizes AI language models-related crime into two large groups—crimes with AI language models, and crimes against/on AI language models. These categories highlight serious flaws that can be associated with cybersecurity issues. Additionally, this paper aims to examine whether the EU’s proposed AI Act, which represents the first-ever legal framework on AI, adequately addresses crimes involving or targeting AI language models. Furthermore, this analysis aims to investigate whether the EU Directive on liability for defective products equates AI language models with other types of software, a position we contend is not appropriate.
Title
EU Digital Law in the AI Era
Edited by
Tatiana-Eleni Synodinou
Philippe Jougleux
Christiana Markou
Thalia Prastitou-Merdi
Copyright Year
2025
Electronic ISBN
978-3-031-96743-6
Print ISBN
978-3-031-96742-9
DOI
https://doi.org/10.1007/978-3-031-96743-6

The PDF files of this book were created in accordance with the PDF/UA-1 standard to improve accessibility. This includes support for screen readers, described non-text content (images, charts), bookmarks for easy navigation, keyboard-friendly links and forms, and searchable, selectable text. We recognize the importance of accessibility and welcome enquiries about the accessibility of our products. For questions or accessibility needs, please contact us at accessibilitysupport@springernature.com.