2015 | Book

A Construction Manual for Robots' Ethical Systems

Requirements, Methods, Implementations

About this Book

This book will help researchers and engineers in the design of ethical systems for robots, addressing the philosophical questions that arise and exploring modern applications such as assistive robots and self-driving cars.

The contributing authors are among the leading academic and industrial researchers on this topic and the book will be of value to researchers, graduate students and practitioners engaged with robot design, artificial intelligence and ethics.

Table of Contents

Frontmatter
Chapter 1. Robots’ Ethical Systems: From Asimov’s Laws to Principlism, from Assistive Robots to Self-Driving Cars
Abstract
This chapter, and the book as a whole, should help you choose and implement an adequate ethical system for your robot in its designated field of activity.
Robert Trappl

Requirements

Frontmatter
Chapter 2. Robot: Multiuse Tool and Ethical Agent
Abstract
In the last decade, research has increasingly focused on robots as autonomous agents that should be capable of adapting to open and changing environments. Developing, building and finally deploying technology of this kind require a broad range of ethical and legal considerations, including aspects regarding the robots’ autonomy, their display of human-like communicative and collaborative behaviour, their characteristics of being socio-technical systems designed for the support of people in need, their characteristics of being devices or tools with different grades of technical maturity, the range and reliability of sensor data and the criteria and accuracy guiding sensor data integration, interpretation and subsequent robot actions. Some of the relevant aspects must be regulated by societal and legal discussion; others may be better cared for by conceiving robots as ethically aware agents. All of this must be considered against steadily changing levels of technical maturity of the available system components. To meet this broad range of goals, results are taken up from three recent initiatives discussing the ethics of artificial systems: the EPSRC Principles of Robotics, the policy recommendations from the STOA project Making Perfect Life and the MEESTAR instrument. While the EPSRC Principles focus on the tool characteristics of robots from a producer, user and societal/legal point of view, STOA Making Perfect Life addresses the pervasiveness, connectedness and increasing imperceptibility of new technology. MEESTAR, in addition, takes an application-centric perspective focusing on assistive systems for people in need.
Brigitte Krenn
Chapter 3. Towards Human–Robot Interaction Ethics
Abstract
Social assistive robots are envisioned as supporting their users not only physically but also by communicating with them; monitoring medication and giving reminders are typical examples of such tasks. This kind of assistance presupposes that such a robot is able to interact socially with a human. The issue discussed in this chapter is whether human–robot social interaction raises ethical questions that have to be dealt with by the robot. A tour d’horizon of possibly related fields of communication ethics makes it possible to outline the distinctive features and requirements of such an “interaction ethics”. Case studies on conversational phenomena show examples of ethical problems at the levels of the “mechanics” of conversation, meaning-making, and relationship. Finally, the chapter outlines possible connections between decision ethics and interaction ethics in a robot’s behaviour control system.
Sabine Payr
Chapter 4. Shall I Show You Some Other Shirts Too? The Psychology and Ethics of Persuasive Robots
Abstract
Social robots may provide a solution to various societal challenges (e.g. the aging society, unhealthy lifestyles, sustainability). In the current contribution, we argue that a crucial feature of social robots’ interactions with humans is that social robots are always created, to some extent, to influence the human: persuasive robots might (very powerfully) persuade human agents to behave in specific ways, by giving information, providing feedback and taking over actions, serving social values (e.g. sustainability) or goals of the user (e.g. therapy adherence), but they might also serve goals of their owners (e.g. selling products). The success of persuasive robots depends on the integration of sound technology, effective persuasive principles and careful attention to ethical considerations. The current chapter brings together psychological and ethical expertise to investigate how persuasive robots can influence human behaviour and thinking in a way that is (1) morally acceptable (focusing on user autonomy, using deontological theories as a starting point for ethical evaluation) and (2) psychologically effective (focusing on the effectiveness of persuasive strategies). These insights are combined in a case study analysing the moral acceptability of persuasive strategies that a persuasive robot might employ while serving as a clothing store clerk.
Jaap Ham, Andreas Spahn

Methods

Frontmatter
Chapter 5. Ethical Regulation of Robots Must Be Embedded in Their Operating Systems
Abstract
The authors argue that unless computational deontic logics (or, for that matter, any other class of systems for mechanizing moral and/or legal principles) for achieving ethical control of future AIs and robots are woven into the operating-system level of such artifacts, such control will be at best dangerously brittle.
Naveen Sundar Govindarajulu, Selmer Bringsjord
Chapter 6. Non-monotonic Resolution of Conflicts for Ethical Reasoning
Abstract
This chapter attempts to specify some of the requirements of ethical robotic systems. It begins with a short story by John McCarthy entitled “The Robot and the Baby”, which shows how difficult it is for a rational robot to be ethical. It then characterizes the different types of “ethical robots” to which this approach is relevant and the nature of the ethical questions that are of concern. The second section distinguishes between the different aspects of ethical systems and focuses on ethical reasoning. It first shows that ethical reasoning is essentially non-monotonic, and then that it has to consider the known consequences of actions, at least if we are interested in modeling consequentialist ethics. The last two sections present two possible implementations of ethical reasoners, one based on ASP (answer set programming) and the other on the BDI (belief, desire, intention) framework for programming agents.
Jean-Gabriel Ganascia
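
To give a flavour of the ASP route mentioned in the abstract, here is a minimal, purely illustrative sketch using the clingo Python API. The rule base and predicates such as permitted/1 and forbidden/1 are invented for this example and are not taken from the chapter:

```python
# Illustrative sketch only: a tiny non-monotonic "ethical" rule base in ASP,
# solved with the clingo Python API (https://potassco.org/clingo/).
# All predicates here are hypothetical examples.
import clingo

PROGRAM = """
action(tell_truth). action(lie).

% Default: every action is permitted unless shown to be forbidden.
permitted(A) :- action(A), not forbidden(A).

% Lying is forbidden by default ...
forbidden(lie) :- not exceptional(lie).

% ... but the default is withdrawn in exceptional circumstances
% (non-monotonicity: adding a fact retracts an earlier conclusion).
exceptional(lie) :- protects_life(lie).

% Uncomment the next line to see the conclusion change:
% protects_life(lie).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))
```

With the last fact commented out, lie is forbidden; adding it withdraws that conclusion, which is exactly the non-monotonic behaviour the abstract argues ethical reasoning requires.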
Chapter 7. Grafting Norms onto the BDI Agent Model
Abstract
This chapter proposes an approach to the design of a normative rational agent based on the belief-desire-intention (BDI) model. Starting from the well-known BDI model, an extension of the BDI execution loop is presented that addresses issues such as norm instantiation and norm internalization, with a particular emphasis on the problem of norm consistency. A proposal for resolving conflicts between newly occurring norms, on one side, and already existing norms or mental states, on the other, is described. While it is fairly difficult to imagine an evaluation for the proposed architecture, a challenging scenario inspired by science-fiction literature is used to give the reader an intuition of how the proposed approach deals with situations of normative conflict.
Mihnea Tufiş, Jean-Gabriel Ganascia
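
A deliberately simplified sketch of the kind of consistency check the abstract describes, not the chapter's actual architecture (the conflict rules and the override strategy below are our own illustrative assumptions):

```python
# Simplified sketch: a BDI-style agent whose deliberation step filters newly
# instantiated norms for consistency with existing intentions and norms
# before internalizing them. Illustrative only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Norm:
    kind: str      # "obligation" or "prohibition"
    action: str

@dataclass
class BDIAgent:
    intentions: set = field(default_factory=set)  # actions the agent is committed to
    norms: set = field(default_factory=set)

    def conflicts(self, norm: Norm) -> bool:
        # A prohibition conflicts with a current intention to do the action;
        # any norm conflicts with an adopted norm of the opposite kind
        # on the same action.
        if norm.kind == "prohibition" and norm.action in self.intentions:
            return True
        return any(n.action == norm.action and n.kind != norm.kind
                   for n in self.norms)

    def internalize(self, norm: Norm) -> None:
        if self.conflicts(norm):
            # One possible resolution strategy: the newly occurring norm
            # overrides the conflicting intention.
            self.intentions.discard(norm.action)
        self.norms.add(norm)

agent = BDIAgent(intentions={"disclose_data"})
agent.internalize(Norm("prohibition", "disclose_data"))
print(agent.intentions, agent.norms)  # intention dropped, norm adopted
```

Other resolution strategies (e.g. letting entrenched intentions block norm adoption) would slot into the same internalize step.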

Implementations

Frontmatter
Chapter 8. Constrained Incrementalist Moral Decision Making for a Biologically Inspired Cognitive Architecture
Abstract
Although most cognitive architectures in general, and LIDA in particular, are still in the early stages of development and far from being adequate bases for implementations of human-like ethics, we think that they can contribute to the understanding, design, and implementation of constrained ethical systems for robots, and we hope that the ideas outlined here might provide a starting point for future research.
Tamas Madl, Stan Franklin
Chapter 9. Case-Supported Principle-Based Behavior Paradigm
Abstract
We assert that ethical decision-making is, to a degree, computable. Some claim that no actions can be said to be ethically correct because all value judgments are relative either to societies or to individuals. We maintain, however, along with most ethicists, that there is agreement on the ethically relevant features in many particular cases of ethical dilemmas and on the right course of action in those cases. Just as stories of disasters often overshadow positive stories in the news, difficult ethical issues are discussed far more often than those that have been resolved, making it seem as if there were no consensus in ethics. Although, admittedly, a consensus of ethicists may not exist for a number of domains and actions, such a consensus is likely to emerge in many of the areas in which intelligent autonomous systems are likely to be deployed and for the actions they are likely to undertake.
Michael Anderson, Susan Leigh Anderson
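
One way to make the "computable ethics" claim concrete is to score candidate actions on their ethically relevant features and let a principle rank them. The following is a hypothetical sketch; the feature names and weights are illustrative inventions, not the principle the chapter derives from cases:

```python
# Hypothetical sketch of principle-based action selection: each candidate
# action is scored on ethically relevant features (satisfaction or violation
# of prima facie duties), and a principle -- here reduced to fixed weights --
# ranks the actions. Features and weights are illustrative only.
FEATURES = ("harm_prevention", "respect_autonomy", "honesty")
WEIGHTS = {"harm_prevention": 3, "respect_autonomy": 2, "honesty": 1}

def score(action_features: dict) -> int:
    # Each feature value is -1 (duty violated), 0 (neutral), or +1 (satisfied).
    return sum(WEIGHTS[f] * action_features.get(f, 0) for f in FEATURES)

candidates = {
    "notify_overseer": {"harm_prevention": 1, "respect_autonomy": -1},
    "do_nothing":      {"harm_prevention": -1, "respect_autonomy": 1},
}
best = max(candidates, key=lambda a: score(candidates[a]))
print(best)  # notify_overseer: preventing harm outweighs autonomy here
```

In a case-supported paradigm, such a ranking principle would be abstracted from cases on which ethicists agree rather than hand-set as above.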
Chapter 10. The Potential of Logic Programming as a Computational Tool to Model Morality
Abstract
We investigate the potential of logic programming (LP) to computationally model morality aspects studied in philosophy and psychology. We do so by identifying three morality aspects that appear, in our view, amenable to computational modeling by appropriately exploiting LP features: the dual-process model (reactive and deliberative) in moral judgment, the justification of moral judgments by contractualism, and intention in moral permissibility. The research aims at developing an LP-based system with the features needed to model moral settings, with emphasis on the three morality aspects above. We have currently co-developed two essential ingredients of the LP system, abduction and logic program updates, by exploiting the benefits of tabling features in logic programs. They serve as the basis for our whole system, into which other reasoning facets will be integrated to model the surmised morality aspects. We exemplify two applications, pertaining to moral updating and to moral reasoning under uncertainty, and detail their implementation. Moreover, we touch upon the potential of our ongoing studies of LP-based cognitive features for the emergence of computational morality in populations of agents endowed with the capacity for intention recognition, commitment, and apology. We conclude with a “message in a bottle” pertaining to this bridging of individual and population computational morality via cognitive abilities.
Ari Saptawijaya, Luís Moniz Pereira
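
As a toy stand-in for the abductive machinery the abstract mentions, the following sketch finds minimal sets of assumable facts under which a goal holds in a propositional rule base. The rules and atom names are invented for the example and do not come from the chapter:

```python
# Minimal illustration of abduction over a propositional logic program:
# find subset-minimal sets of abducibles that explain a goal.
# Rule base and atom names are invented for this example.
from itertools import chain, combinations

RULES = {  # head: list of alternative bodies
    "permissible": [("consented", "harmless")],
    "harmless": [("supervised",)],
}
ABDUCIBLES = ("consented", "supervised")

def holds(atom, facts):
    if atom in facts:
        return True
    return any(all(holds(b, facts) for b in body)
               for body in RULES.get(atom, []))

def abduce(goal):
    # Enumerate all subsets of abducibles, keep those under which the goal
    # holds, then filter to the subset-minimal explanations.
    subsets = chain.from_iterable(
        combinations(ABDUCIBLES, r) for r in range(len(ABDUCIBLES) + 1))
    good = [set(s) for s in subsets if holds(goal, set(s))]
    return [s for s in good if not any(t < s for t in good)]

print(abduce("permissible"))  # [{'consented', 'supervised'}]
```

The LP-based system described in the abstract additionally exploits tabling and logic program updates, which this brute-force sketch does not attempt to reproduce.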
Metadata
Title
A Construction Manual for Robots' Ethical Systems
Edited by
Robert Trappl
Copyright Year
2015
Electronic ISBN
978-3-319-21548-8
Print ISBN
978-3-319-21547-1
DOI
https://doi.org/10.1007/978-3-319-21548-8