
2025 | Book

Emotional Data Applications and Regulation of Artificial Intelligence in Society

Edited by: Rosa Ballardini, Rob van den Hoven van Genderen, Sari Järvinen

Publisher: Springer Nature Switzerland

Book series: Law, Governance and Technology Series


About this book

This revolutionary new book on the use of artificial intelligence to process human emotion data seeks to raise awareness of the topic, discuss it thoroughly from a multidisciplinary perspective and, in this way, disseminate research findings that elaborate on current and future regulatory needs for the responsible and ethical development and application of emotional AI. Human biometric and psychological data are the most sensitive data about human behaviour. The book aims to provide a holistic understanding of the key challenges and to propose new, workable, substantive and methodological solutions for addressing current and future legal and ethical needs and dilemmas arising from the development and use of emotional AI.
Although both the academic community and policymakers continue to debate the ethical regulation of artificial intelligence intensively, understanding of both the opportunities and the risks associated specifically with emotional AI remains very limited. Yet emotional AI is one of the most promising areas for AI development and application. Several of these innovations could be a welcome change in our society, as they could improve our wellbeing in various ways. However, these inventions require considerable investment, so legal incentives such as intellectual property rights are crucial to support investment in these areas. At the same time, these innovations could negatively affect the privacy and autonomy of natural persons and raise both legal and ethical concerns. Their legal and ethical acceptability, as well as their societal acceptance, could therefore be called into question by several EU legal provisions, such as the GDPR, rules on communications, social platforms and marketing, and the draft AI Act. Nevertheless, the current legal landscape for emotional AI in Europe is far from clear, an aspect that becomes even more evident when we consider the global picture of the regulatory framework for emotional AI.
In this book, a diverse team of internationally recognised experts addresses these questions and conducts a multidisciplinary study of techno-economic-legal developments relating to emotional AI, their implications and the need for action. The book offers profound scholarly and societally relevant insights into the past, present and future of AI in general, and in particular its many implications for law and policy. Although the book's primary audience is legal academics, it also offers solid guidance for legislators and policymakers in general, as well as for companies and organisations.

Table of contents

Frontmatter
Introduction
Abstract
This revolutionary new book on the processing of human data by artificial intelligence (AI) to infer information about emotions seeks to raise awareness, discuss the topic thoroughly from a multidisciplinary perspective and, in this way, disseminate research findings that elaborate on current and future regulatory needs for responsible and ethical developments and uses of emotional AI.
Rosa Ballardini, Rob van den Hoven van Genderen, Sari Järvinen

Technology and Business

Frontmatter
Adding Emotional Intelligence to Physical Spaces: Data-Driven Solutions for Measuring, Analysing, and Responding to User Needs and Expectations
Abstract
Although emotions can influence our living, experiences, and performance in different contexts, current immersive services in physical environments fail to read, interpret, and respond to our emotions. This study aims to provide an understanding of how to enhance physical spaces with emotionally aware, intelligent, and integrated services that meet users' expectations. It focuses on three different environments (office, public space, and transport), defines the high-level requirements for the technologies needed to implement emotionally intelligent services, and identifies the stakeholders involved in service creation and data processing. Service concepts were generated in collaboration with experts from academia and industry using co-creation methods. The industrial partners of this study see that novel sensing capabilities and artificial emotional intelligence-based solutions for behaviour analysis and situational awareness have the potential to enable new services for both space operators and users. For such services, the pipeline from data to information and actions must be streamlined and modular, which requires research on technologies, analytics methods, and service and business model design. Ultimately, such services will enable businesses to provide experiences that understand and respond to human behaviour and emotional responses in everyday life as people interact with technology.
Sari Järvinen, Johanna Kallio
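The chapter's core architectural point, a streamlined and modular pipeline from data to information to actions, can be made concrete with a minimal sketch. The Python sketch below is illustrative only: every name in it (SensorReading, SituationEstimate, infer_state, act, run_pipeline) and the toy threshold are assumptions, not the chapter's implementation.

```python
# Minimal sketch of a modular data -> information -> action pipeline for an
# emotionally aware physical space. All names and thresholds are illustrative
# assumptions, not the chapter's design.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class SensorReading:
    sensor_id: str      # e.g. an office CO2 sensor or a wearable channel
    timestamp: float    # seconds since epoch
    value: float        # raw, normalised measurement

@dataclass
class SituationEstimate:
    label: str          # e.g. "elevated-stress", "comfortable"
    confidence: float   # 0.0 .. 1.0

def infer_state(window: Iterable[SensorReading]) -> SituationEstimate:
    """Stand-in for an emotion/behaviour analytics model."""
    values = [r.value for r in window]
    mean = sum(values) / max(len(values), 1)
    # Toy threshold in place of a trained classifier.
    if mean > 0.7:
        return SituationEstimate("elevated-stress", 0.6)
    return SituationEstimate("comfortable", 0.6)

def act(estimate: SituationEstimate) -> None:
    """Stand-in for a service response, e.g. adjusting lighting or HVAC."""
    if estimate.label == "elevated-stress" and estimate.confidence > 0.5:
        print("Dimming lights and lowering ambient noise")

def run_pipeline(window: list[SensorReading],
                 model: Callable[[Iterable[SensorReading]],
                                 SituationEstimate] = infer_state) -> None:
    # Each stage is a replaceable callable: sensors, analytics, and responses
    # can be swapped independently, which is the modularity the abstract
    # calls for.
    act(model(window))
```

Because each stage is passed in as a callable, a space operator could swap the sensing source, the analytics model, or the service response independently, without touching the rest of the pipeline.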
Research Challenges Along the Data Analysis Chain for Emotional AI
Abstract
Emotional AI represents a relatively new research area, and to date, its focus has been on the development of AI methods to recognise instantaneous human states, for example, whether a person is currently happy or experiencing acute stress. However, because the majority of emotion detection studies to date have been conducted in laboratory settings, many technologies are not yet sufficiently mature for real-world applications. For example, though truly emotionally intelligent AI should be convenient for the users, adaptive to their personalities (given that the expression of the same emotion can vary across individuals), and capable of also assessing long-lasting human conditions, such as chronic stress, research into these problems is only starting. This chapter offers an overview of the most popular data sources and AI methods and discusses the main challenges that must be addressed to develop emotional AI methods for long-term real-world use.
Elena Vildjiounaite, Julia Kantorovitch, Johanna Kallio, Vesa Kyllönen, Jenny Benois-Pineau
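One challenge the chapter raises, moving from instantaneous emotion detection to the assessment of long-lasting conditions such as chronic stress while adapting to individual differences, can be sketched with simple smoothing plus per-user calibration. Everything below (the class name, the smoothing constant alpha, the threshold) is an illustrative assumption, not a method from the chapter.

```python
# Sketch: turning instantaneous stress predictions into an estimate of a
# long-lasting condition, with a simple per-person baseline. Constants are
# illustrative assumptions only.
from collections import defaultdict

class LongTermStressTracker:
    def __init__(self, alpha: float = 0.01, threshold: float = 0.6):
        self.alpha = alpha             # small alpha -> slow, long-term estimate
        self.threshold = threshold
        self.ema = defaultdict(float)  # per-user exponential moving average
        self.baseline = {}             # per-user calibration offset

    def calibrate(self, user: str, relaxed_scores: list[float]) -> None:
        """Record a user's typical score when relaxed, since the same
        emotion is expressed differently across individuals."""
        self.baseline[user] = sum(relaxed_scores) / len(relaxed_scores)

    def update(self, user: str, instantaneous_score: float) -> bool:
        """Feed one instantaneous model output (0..1); return True if the
        smoothed, baseline-corrected estimate suggests chronic stress."""
        corrected = instantaneous_score - self.baseline.get(user, 0.0)
        self.ema[user] = (1 - self.alpha) * self.ema[user] + self.alpha * corrected
        return self.ema[user] > self.threshold
```

The design choice here mirrors the abstract's argument: instantaneous classifiers alone cannot distinguish a stressful moment from chronic stress, so some form of personalised, long-horizon aggregation is needed on top of them.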
Explainable AI in Image Classification Tasks: How to Choose the Best?
Abstract
The most popular methods in the AI/machine-learning paradigm are mainly black boxes. For users to accept these methods in various tasks, explanations of why an AI tool reaches a given decision are necessary, as this is a matter of user trust in AI. Although dedicated explanation tools are being developed in great numbers, users and decision makers do not know which ones are most appropriate for them. In this chapter, we propose a taxonomy of AI-explanation methods. This contribution aims to help users choose among explainable AI (XAI) methods.
Alexey Zhukov, Jenny Benois-Pineau, Romain Giot, Romain Bourqui
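For readers unfamiliar with the methods such a taxonomy covers, here is a sketch of one widely used perturbation-based explanation technique, occlusion mapping, written model-agnostically. The function name and parameters are illustrative assumptions; the chapter proposes a taxonomy of XAI methods, not this particular implementation.

```python
# Sketch of a perturbation-based explanation method (occlusion mapping), one
# common family in XAI taxonomies. `model` is any callable mapping an image
# batch to class probabilities; all names are illustrative assumptions.
import numpy as np

def occlusion_map(model, image: np.ndarray, target_class: int,
                  patch: int = 8, baseline: float = 0.0) -> np.ndarray:
    """Slide a blank patch over the image; where masking a region lowers
    the target-class score most, that region mattered most."""
    h, w = image.shape[:2]
    base_score = model(image[None])[0, target_class]
    heat = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            score = model(occluded[None])[0, target_class]
            heat[y:y + patch, x:x + patch] = base_score - score
    return heat  # higher values = regions more important to the decision
```

Given any classifier wrapped as `model(batch) -> probabilities`, the returned heat map can be overlaid on the input image to show which regions drove the decision, which is exactly the kind of user-facing explanation the chapter's taxonomy helps to choose between.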

Law and Ethics for and with Emotional AI

Frontmatter
Artificial Intelligence, Threat or Enhancement of the Quality of Our Wellbeing and Personal Life?
Abstract
Although the AI processing of personal and, specifically, emotional data can enhance people's wellbeing, it can also intrude on fundamental rights, such as physical and mental integrity, autonomy and privacy. In this respect, the main attention of European legislators as well as many scholars is directed at the risks and dangers of AI processing (see, among others: https://www.pewresearch.org/internet/2023/06/21/themes-the-most-harmful-or-menacing-changes-in-digital-life-that-are-likely-by-2035/). Instruments such as the European Artificial Intelligence Act (AIA; official version of the Act: https://eur-lex.europa.eu/eli/reg/2024/1689/oj) concentrate on risk assessment and on avoiding the risks and perceived dangers of the use of AI systems, thereby neglecting AI's positive potential to increase wellbeing.
Rob van den Hoven van Genderen
On AI That Knows How We Feel, Without Knowing Who We Are: EU Law and the Processing of Soft Biometric Data by Emotional AI
Abstract
Increasingly, people's emotions are datafied through emotional AI. In recent years, the development and deployment of emotional AI has evolved rapidly. One category of data processed by emotional AI is biometric data: hard biometric data, relating to identified or identifiable individuals, and soft biometric data, which cannot identify or single out individuals. The processing of both hard and soft biometric data makes it possible to manipulate individuals and, as such, may constitute a violation of fundamental rights: in particular, the right to human dignity; the right to a private and family life; and the right to the protection of personal data. Besides those human rights, at an EU level, hard biometric data fall within the scope of the General Data Protection Regulation (GDPR). By definition, as soft biometric data do not enable the identification of an individual, they remain outside the remit of the GDPR. In this chapter we argue that the European Union (EU) regulatory regime governing the use of emotional AI technologies that process soft biometric data insufficiently safeguards the full exercise of the aforementioned fundamental rights. We analyse the AI Act and the various inconsistencies it exhibits, which prompt questions about its expected effectiveness in regulating the processing of soft biometric data by emotional AI.
Eline L. Leijten, Arno R. Lodder
Personal Data Protection in Emotional AI: The Facial Coding Example
Abstract
The processing of emotional data points is fast becoming a critical part of Artificial Intelligence (AI) data processing. As these emotional data points can identify natural persons, they fall within the definition of personal data (at least) within the context of EU law. Facial coding involves the observation of a data subject's facial expressions as a means of capturing their emotional reaction at any given time or in any given situation. Ordinarily, processing activities that capture facial images are treated as processing of sensitive personal data, for reasons including the capability of those images to reveal more data than intended. This chapter considers the applicability and impact of EU data protection law on emotional data points. By identifying the compliance gaps, examining the relationship and intersections between the General Data Protection Regulation (GDPR) and the Artificial Intelligence (AI) Act, and proposing possible solutions to the identified gaps, the chapter analyses the compatibility of facial coding and data protection law. It does so by using facial coding, and the (facial) expressions collected through it, as a use case. These considerations will highlight the relationship between emotional AI and data protection law while emphasising areas of divergence and convergence.
Emmanuel Salami
Consenting to the Use of Emotional Data by AI-Based Services
Abstract
In data protection law, consent is a broad and flexible legal basis. While the use of emotion-related personal data in various AI-based services could be based on consent, this involves several practical challenges and unresolved legal questions that extend beyond data protection to other areas of law. In this chapter, we discuss to what extent and under which circumstances consent can be relied upon by providers of emotional AI services and applications. First, we analyse the requirements set by EU data protection law regarding consent as a legal basis for processing emotional data, and the issues raised by the rules concerning automated decision-making in the context of emotional AI. Then, we examine the boundaries and additional requirements imposed by EU competition law and the AI Act regarding the use of emotional AI based on data subjects’ consent. Our analysis highlights challenges that arise particularly when emotional AI is used at workplaces or in publicly accessible spaces.
Juha Vesala, Juhana Riekkinen, Viivi Rankka
A Duty of Loyalty for Emotion Data
Abstract
Emotion data is dangerous because of how much it reveals about us and how deeply it is coveted by industry and government. This data—purporting to represent the emotional, psychological, or physical status of people by processing their expressions, movements, behaviour or other physical or mental characteristics—is irresistible for companies to use to their advantage. Having failed to recognize this danger, the law has left people open to the exploitative whims of industry and government who process emotion data. This chapter proposes a duty of emotion data loyalty to prohibit recipients of emotion data from acting in ways that conflict with the best interests of the trusting party. The idea behind loyalty is simple: organizations should not process emotion data or design technologies that conflict with the best interests of trusting parties. Our emotions should not be tools with which to manipulate us. Loyalty is the antidote to opportunism: it shifts the legal duty from self-serving to other-serving and has a morality broader than the profit-maximization of neoliberal capitalism. Though loyalty has deep roots in our law, there is more than just abstract ethics and notions of honour to the duty of emotion data loyalty. Loyalty compels firm legal duties and prohibitions that, when breached, give rise to legal liability on the grounds of conflicts of interest or duty.
Woodrow Hartzog, Neil Richards
TALKING TO STRANGERS. On the Use of Emotional AI to Prevent and Combat Online Grooming and Sex Chatting with Minors
Abstract
At the intersection of emerging AI technologies and the protection of minors against online sexual abuse, fascinating yet worrisome developments unfurl before our eyes. On the one hand, various new forms of online text-based sexual behaviour aimed at minors (online grooming, solicitation of minors, attempted online grooming and online sex chatting) are being brought within the scope of criminal law, leading to forms of preventive or risk-based criminal law. On the other hand, various (emotional) AI technologies that process and analyse textual/linguistic data are more and more commonly employed as investigative tools to detect or prevent these kinds of (criminal) behaviour or to catch online predators in the act. However, not only do these technologies still fall short in many respects, and are therefore demonstrably ineffective, but their use also poses a potential threat to fundamental rights and online values such as privacy, freedom of expression, confidentiality of communication, and online security and safety, especially of the very children they are supposed to protect. This chapter offers a critical review of some of the possible consequences of AI-driven prevention and detection of online text-based sexual offenses, particularly through the use of police bots and large-scale, automated scanning of online speech.
Anne de Hingh
AI in the Lecture Room: Analysing Two Use Cases in the Context of Higher Education
Abstract
There is artificial intelligence (AI) in the lecture room, and it cannot be ignored. With unprecedented speed, AI has made its way to the everyday practices of higher education. As such, automation and analytics are nothing new in the academic educational context. However, with more recent AI applications it has truly become impossible to ignore AI in everyday academia and to avoid rethinking pedagogical approaches and solutions. Moreover, following the COVID-19 pandemic, digital tools for higher education are now being systematically developed, with an emphasis on quality and wellbeing of students. At the same time, regulatory instruments for AI are being tailored on various levels.
This chapter discusses AI in higher education from a legal perspective while addressing specific use cases. The chapter briefly touches upon the role of emotions and interaction in learning, linking this to the features of AI in education. Then, the selected use cases are analysed from a regulatory point of view. The focus of this chapter is on EU level legal instruments, alongside various ethical and self-regulatory guidelines that frame the use of AI in education. In particular, the AI Act is analysed in the context of higher education, including the relevant human rights dimensions.
Anette Alén, Beata Mäihäniemi, Aušrinė Pasvenskienė
Fair and Sustainable Data Governance for Responsible Uses of Emotional Data
Abstract
In the data economy, setting internal standards and data policies for how data is gathered, stored, processed, and disposed of has become both crucial and mandatory for companies and organizations. Thus, different models for data governance have been developed to support companies and organizations in their everyday data business. One prominent example is the Finnish Innovation Fund Sitra's Rulebook model for a fair data economy, currently used by several data-sharing networks across Europe. Although these models generally apply to data governance, whether and to what extent they are suitable for more specific contexts, such as emotional data, is questionable. Using scenario methodology, we address this issue by elaborating on how emotional data and AI will or can be developed and used over the next five years, and we assess the suitability of existing data governance models. We do this by contextualizing the applicability and validity of the Sitra Rulebook model within the framework of our developed scenario. Ultimately, we provide new understanding of the benefits, but also the possible limitations, of current data governance models in the context of the rapidly approaching wave of emotional data and AI, while also reflecting on possible needs for further development.
Rosa Ballardini, Olli Pitkänen
Emotional AI and the Consensus-Based Remuneration Regime in Southeast Asia
Abstract
Some emotional generative AI (GenAI) companies have acquired the power to extract copyright-protected works from anyone in their orbit without prior authorisation. Such monopolistic abuses could potentially disrupt the market for these copyrighted works, thereby forming the basis for numerous lawsuits across multiple jurisdictions. Indeed, the creative industry will be greatly affected by this phenomenon, not least rights-holders in the Global South, including the Association of Southeast Asian Nations (ASEAN), who have suffered from ineffective copyright enforcement and restricted freedom of expression for generations. This chapter discusses how to remunerate rights-holders whose work is unlawfully used to train GenAI systems, focusing on the ASEAN and how its non-interference principle (the ASEAN Way) could address this issue. Moreover, a limited Brussels Effect on Text- and Data-Mining (TDM) regulation in the ASEAN is presented. This chapter finds that the ASEAN Way remains relevant as the foundation for remunerating rights-holders in the ASEAN. Furthermore, this chapter proposes that, through the Initiation of GenAI Training Remuneration (TDR), stakeholders, following the spirit of 'Gotong Royong' in the Malay Archipelago, could achieve consensus on a pro-rata model for the amount and method of remuneration for machine learning by emotional GenAI applications.
Artha Dermawan, Péter Mezei
Damage Caused by Emotional AI: Do Existing and Prospective Liability Rules Provide Sufficient Protection?
Abstract
Emotionally intelligent AI can process the most personal and intimate information. When this is not handled with the utmost care, it can cause discrimination and stigmatization, violate a person's privacy, and infringe other fundamental rights. This is particularly relevant in relationships with significant power asymmetry, such as that between an employer and an employee or a public authority and a citizen. In such cases, the damage inflicted upon individuals is not tangible, meaning that it does not result in actual physical injury or damage to property, and often there is no consequential economic loss either. Still, the damage can cause serious distress to the affected person.
This chapter examines whether existing and prospective liability rules provide affected parties with sufficient protection. Relevant frameworks referenced will be the General Data Protection Regulation (GDPR), the AI Act, the proposed Directive on AI Liability and national civil liability laws of EU Member States. Special focus will be placed on the recoverability of immaterial harm. To date, many national laws—and courts—are reluctant to provide compensation for non-material harm. Therefore, the question is whether this reluctance is still appropriate considering the significant risks AI poses for fundamental rights and other immaterial interests.
Béatrice Schütte
Commercializing Emotions in a Lawful and Responsible Manner: The Limitations, Possibilities and Dilemmas in an Emotional AI Service Business Concept in the Workplace
Abstract
While the commercial possibilities of emotional AI for businesses are manifold, so are the risks this technology brings. Currently, debate on the legal and ethical aspects of commercial emotional AI is ongoing, and even a ban is being considered, potentially endangering innovation and the commercial uptake of emotional AI in the EU. It is safe to say that the provision of this technology currently takes place in a grey area, drawing particular attention to legal and ethical dilemmas. This chapter is an exploratory study of the collision, dependency and compatibility of law, ethics and practice in the commercialization of a certain type of emotional AI: emotion-aware systems in the workplace. If the processing of emotions, one's own and others', by people is already complex and at times risky, how can a company utilize emotional AI commercially in a legally and ethically responsible manner? First, this chapter explores how the relevant AI legislation balances the risks and possibilities of emotional AI. It then explores how a company can, in practice, navigate the current obstacles on its way to providing lawful and responsible emotional AI services.
Amanda Drake
Design Science Methodology for AI-Based Contract Design Research
Abstract
Contract design matters. The literature confirms that different contract designs can induce varying emotions, feelings, behaviour, and views of the contractual relationship. Specifically, prevention- and promotion-framed contracts and contract clauses seem to induce different emotions, which have been shown to impact the development of trust and even relationship performance. In addition, as many traditional contracts lack a design perspective, they may induce many negative emotions, such as confusion and frustration, among contract users. On the other hand, well-designed contracts can empower and engage contract users.
Although (emotional) AI opens new avenues for contract design and legal technology research, it requires the adoption of new research methodologies. This chapter presents a novel methodology for legal research, design science methodology, which originates from computer science and serves a dual purpose: creating new scientific knowledge while developing and evaluating an information systems artefact. To illustrate how the methodology can be applied in practice, the chapter presents a research setting in which design science methodology is used to develop and evaluate an AI-based contract design tool that is based on the guiding principles of proactive contract theory and that takes into account the social and psycho-cognitive effects of contracts.
Jouko Nuottila, Anna Hurmerinta-Haanpää, Hilja Autto
Backmatter
Metadata
Title
Emotional Data Applications and Regulation of Artificial Intelligence in Society
Edited by
Rosa Ballardini
Rob van den Hoven van Genderen
Sari Järvinen
Copyright year
2025
Electronic ISBN
978-3-031-80111-2
Print ISBN
978-3-031-80110-5
DOI
https://doi.org/10.1007/978-3-031-80111-2