2019 | Book

Explainable, Transparent Autonomous Agents and Multi-Agent Systems

First International Workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13–14, 2019, Revised Selected Papers

Editors: Dr. Davide Calvaresi, Amro Najjar, Prof. Michael Schumacher, Kary Främling

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the proceedings of the First International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2019, held in Montreal, Canada, in May 2019.

The 12 revised and extended papers presented were carefully selected from 23 submissions. They are organized in topical sections on explanation and transparency; explainable robots; opening the black box; explainable agent simulations; planning and argumentation; explainable AI and cognitive science.

Table of Contents

Frontmatter

Explanation and Transparency

Frontmatter
Towards a Transparent Deep Ensemble Method Based on Multiagent Argumentation
Abstract
Ensemble methods improve machine learning results by combining different models. However, one of the major drawbacks of these approaches is their opacity: they provide no explanation of their results and do not allow the integration of prior knowledge. As the use of machine learning increases in critical areas, explaining classification results and being able to introduce domain knowledge into the learned model have become a necessity. In this paper, we present a new deep ensemble method based on argumentation that combines machine learning algorithms with a multiagent system in order to explain the results of classification and to allow the injection of prior knowledge. The idea is to extract arguments from the classifiers and to combine the classifiers using argumentation. This makes it possible to exploit the internal knowledge of each classifier, to provide an explanation for the decisions, and to facilitate the integration of domain knowledge. The results demonstrate that our method effectively improves deep learning performance in addition to providing explanations and transparency of the predictions.
Naziha Sendi, Nadia Abchiche-Mimouni, Farida Zehraoui
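The abstract above describes extracting arguments from individual classifiers and combining them through argumentation. As a rough illustration only (the `Argument`/`resolve` names and the total-strength rule are assumptions, not taken from the paper), the following sketch shows how classifier outputs and an injected domain rule could compete, with the winning arguments doubling as the explanation:

```python
# Hypothetical sketch: combining classifier outputs through simple argumentation.
from dataclasses import dataclass

@dataclass
class Argument:
    source: str        # which classifier (or domain rule) put the argument forward
    label: str         # the class it argues for
    strength: float    # e.g. the classifier's confidence

def resolve(arguments):
    """Arguments for different labels compete; the label whose arguments have the
    greatest total strength wins, and its arguments double as the explanation."""
    support = {}
    for arg in arguments:
        support.setdefault(arg.label, []).append(arg)
    winner = max(support, key=lambda lbl: sum(a.strength for a in support[lbl]))
    explanation = [f"{a.source} argued for '{winner}' with strength {a.strength:.2f}"
                   for a in support[winner]]
    return winner, explanation

# Example: two neural sub-models and one injected domain rule.
args = [Argument("cnn_1", "malignant", 0.72),
        Argument("cnn_2", "benign", 0.55),
        Argument("domain_rule:age>60", "malignant", 0.30)]
label, why = resolve(args)
print(label)
for line in why:
    print(" -", line)
```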
Effects of Agents’ Transparency on Teamwork
Abstract
Transparency in the field of human-machine interaction and artificial intelligence has seen a growth of interest in the past few years. Nonetheless, there are still few experimental studies on how transparency affects teamwork, in particular in collaborative situations where the strategies of others, including agents, may seem obscure.
We explored this problem using a collaborative game scenario with a mixed human-agent team. We investigated the role of transparency in the agents' decisions by having agents reveal and explain the strategies they adopt in the game, in a manner that makes their decisions transparent to the other team members. The game embodies a social dilemma in which a human player can choose to contribute to the goal of the team (cooperate) or act selfishly in the interest of his or her individual goal (defect). We designed a between-subjects experimental study with different conditions, manipulating transparency within the team. The results showed an interaction effect between the agents' strategy and transparency on trust, group identification, and human-likeness. Our results suggest that transparency has a positive effect on people's perception of trust, group identification, and human-likeness when the agents use a tit-for-tat or a more individualistic strategy. In fact, adding transparent behaviour to an unconditional cooperator negatively affects the measured dimensions.
Silvia Tulli, Filipa Correia, Samuel Mascarenhas, Samuel Gomes, Francisco S. Melo, Ana Paiva
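To make the manipulated strategy and transparency conditions concrete, here is a minimal sketch (not the authors' game implementation) of a tit-for-tat agent whose transparency flag controls whether it announces the reasoning behind its move:

```python
# Illustrative sketch only: a tit-for-tat agent in a repeated social dilemma
# that, when "transparent", explains the move it is about to make.
class TitForTatAgent:
    def __init__(self, transparent=True):
        self.transparent = transparent
        self.last_partner_move = "cooperate"   # start by cooperating

    def act(self):
        move = self.last_partner_move          # mirror the partner's last move
        if self.transparent:
            print(f"I will {move} because you chose to "
                  f"{self.last_partner_move} last round.")
        return move

    def observe(self, partner_move):
        self.last_partner_move = partner_move

agent = TitForTatAgent(transparent=True)
agent.observe("defect")
print(agent.act())   # -> "defect", preceded by an explanation of why
```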

Explainable Robots

Frontmatter
Explainable Multi-Agent Systems Through Blockchain Technology
Abstract
Advances in Artificial Intelligence (AI) are contributing to a broad set of domains. In particular, Multi-Agent Systems (MAS) are increasingly approaching critical areas such as medicine, autonomous vehicles, criminal justice, and financial markets. This trend is producing a growing entanglement between AI and human society, which raises several concerns about user acceptance of AI agents; trust issues, mainly due to the agents' lack of explainability, are the most relevant. In recent decades, the priority has been pursuing optimal performance at the expense of interpretability. This has led to remarkable achievements in fields such as computer vision, natural language processing, and decision-making systems. However, the crucial questions driven by the social reluctance to accept AI-based decisions may lead to entirely new dynamics and technologies fostering explainability, authenticity, and user-centricity. This paper proposes a joint approach employing both blockchain technology (BCT) and explainability in the decision-making process of MAS. By doing so, current opaque decision-making processes can be made more transparent and secure, and thereby trustworthy from the human user's standpoint. Moreover, several case studies involving Unmanned Aerial Vehicles (UAVs) are discussed. Finally, the paper discusses roles, balance, and trade-offs between explainability and BCT in trust-dependent systems.
Davide Calvaresi, Yazan Mualla, Amro Najjar, Stéphane Galland, Michael Schumacher
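As a rough sketch of the general idea of anchoring agent explanations in a tamper-evident, blockchain-like log (the `ExplanationLedger` class and its fields are illustrative assumptions, not the paper's protocol):

```python
# Minimal sketch: a hash-chained ledger of agent decisions and their explanations.
import hashlib, json, time

class ExplanationLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64,
                       "payload": "genesis", "timestamp": 0.0}]

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, agent_id, decision, explanation):
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1,
                 "prev_hash": self._hash(prev),    # link to the previous block
                 "payload": {"agent": agent_id, "decision": decision,
                             "explanation": explanation},
                 "timestamp": time.time()}
        self.chain.append(block)

    def verify(self):
        # Every block must reference the hash of its predecessor.
        return all(self.chain[i]["prev_hash"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = ExplanationLedger()
ledger.append("uav_7", "reroute", "wind speed above safety threshold on segment 3")
print(ledger.verify())   # True unless a block is tampered with afterwards
```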
Explaining Sympathetic Actions of Rational Agents
Abstract
Typically, humans do not act purely rationally in the sense of classic economic theory. Different patterns of human action have been identified that are not aligned with the traditional view of human actors as rational agents who act to maximize their own utility function. For instance, humans often act sympathetically, i.e., they choose actions that serve others in disregard of their egoistic preferences. Even if there is no immediate benefit resulting from a sympathetic action, it can be beneficial for the executing individual in the long run. This paper builds on the premise that it can be beneficial to design autonomous agents that employ sympathetic actions in a similar manner to humans. We create a taxonomy of sympathetic actions that reflects the different types of goals an agent can pursue by acting sympathetically. To ensure that sympathetic actions are recognized as such, we propose different explanation approaches autonomous agents may use, focusing on human-agent interaction scenarios. As a first step towards an empirical evaluation, we conduct a preliminary human-robot interaction study that investigates the effect of explanations of (somewhat) sympathetic robot actions on the human participants of human-robot ultimatum games. While the study does not provide statistically significant findings (though there are notable differences), it can inform future in-depth empirical evaluations.
Timotheus Kampik, Juan Carlos Nieves, Helena Lindgren
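A hypothetical illustration of the kind of sympathetic action and accompanying explanation involved in an ultimatum-game setting (the thresholds and wording are my assumptions, not the study's protocol):

```python
# Toy sketch: a responder that accepts some "unfair" offers sympathetically and says why.
def respond(offer, total=10, fairness_threshold=0.4, sympathetic=True):
    share = offer / total
    if share >= fairness_threshold:
        return True, "I accept: the offer is fair."
    if sympathetic:
        return True, ("I accept even though the offer favours you, "
                      "because keeping our collaboration going matters to me.")
    return False, "I reject: the offer is too unfair."

accepted, explanation = respond(offer=2)
print(accepted, "-", explanation)
```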
Conversational Interfaces for Explainable AI: A Human-Centred Approach
Abstract
One major goal of Explainable Artificial Intelligence (XAI), in order to enhance trust in technology, is to enable the user to request information and explanations directly from an intelligent agent. We propose Conversational Interfaces (CIs) as the ideal setting, since they are intuitive for humans and computationally processable. While there are many approaches addressing the technical and agent-related issues of this human-agent communication problem, the user perspective appears to be widely neglected. With the goal of better understanding requirements and identifying implicit user expectations, a Wizard of Oz (WoZ) experiment was conducted in which participants tried to elicit basic information from a pretended artificial agent via a Conversational Interface ("What are your capabilities?"). The chats were analysed by means of Conversation Analysis, which verified the hypothesis that users pursue fundamentally different strategies. The results illustrate the vast variety in human communication and disclose both the requirements of users and the obstacles in implementing protocols for interacting agents. Finally, we infer essential indications for the implementation of such a CI. The findings show that the existing intent-based design of Conversational Interfaces is very limited, even in a well-defined task-based interaction.
Sophie F. Jentzsch, Sviatlana Höhn, Nico Hochgeschwender
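The limitation of intent-based design noted in the findings can be seen in a toy keyword-to-intent matcher (entirely illustrative; not the system used in the study): anything outside the scripted phrasings falls through to a fallback.

```python
# Toy sketch of intent-based design: a fixed keyword-to-intent map cannot
# follow the varied strategies users actually take in open conversation.
INTENTS = {
    "capabilities": ["what can you do", "your capabilities", "what are you able to"],
    "explanation":  ["why did you", "explain", "how did you decide"],
}

def classify(utterance):
    text = utterance.lower()
    for intent, patterns in INTENTS.items():
        if any(p in text for p in patterns):
            return intent
    return "fallback"   # anything outside the scripted phrasings is lost

print(classify("What are your capabilities?"))              # capabilities
print(classify("So... is there anything you're good at?"))  # fallback
```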

Opening the Black Box

Frontmatter
Explanations of Black-Box Model Predictions by Contextual Importance and Utility
Abstract
The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustable intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users; explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts for extracting explanations that are easily understandable by experts as well as novice users. This method explains prediction results without transforming the model into an interpretable one. We present examples of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of the explanations in a car-selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
Sule Anjomshoae, Kary Främling, Amro Najjar
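A minimal sketch of how CI and CU could be estimated for a single feature by sampling its value range against a black-box model. The normalisation constants (`absmin`, `absmax`) and the sampling scheme are assumptions here; the paper's exact formulation may differ.

```python
# Sketch: Contextual Importance (how much a feature can move the output in this
# context) and Contextual Utility (where the current value sits in that range).
import numpy as np

def ci_cu(predict, instance, feature, feature_range, absmin=0.0, absmax=1.0, n=200):
    samples = np.tile(instance, (n, 1))
    samples[:, feature] = np.linspace(*feature_range, n)   # vary only this feature
    outputs = predict(samples)
    cmin, cmax = outputs.min(), outputs.max()
    ci = (cmax - cmin) / (absmax - absmin)
    cu = (predict(instance[None, :])[0] - cmin) / (cmax - cmin + 1e-12)
    return ci, cu

# Example with a toy black-box model: probability rises with feature 0.
predict = lambda X: 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))
ci, cu = ci_cu(predict, np.array([0.8, 0.3]), feature=0, feature_range=(0.0, 1.0))
print(f"CI={ci:.2f}, CU={cu:.2f}")
```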
Explainable Artificial Intelligence Based Heat Recycler Fault Detection in Air Handling Unit
Abstract
We are entering a new age of AI applications in which machine learning is the core technology, but machine learning models are generally non-intuitive, opaque, and complicated for people to understand. The inability of current AI applications to explain their decisions and actions to end users has limited their effectiveness. Explainable AI will enable users to understand, and accordingly trust and effectively manage, the decisions made by machine learning models. We apply explainable artificial intelligence to fault detection for the heat recycler in an Air Handling Unit (AHU), a task that is particularly burdensome because the reason for a failure is mostly unknown and unique to each case. The key requirement of such systems is the early diagnosis of faults, for both economic and functional efficiency. Support Vector Machines and Neural Networks have been used to diagnose the fault, and explainable artificial intelligence has been used to explain the models' behaviour.
Manik Madhikermi, Avleen Kaur Malhi, Kary Främling
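A hedged sketch of the overall pipeline on synthetic data (not the AHU dataset): train an SVM fault detector and use permutation importance as one simple, generic way to surface which sensors drive its decisions. The sensor names are invented for illustration.

```python
# Sketch: SVM fault detector on synthetic sensor readings, explained via
# permutation importance (a generic model-agnostic technique, not necessarily
# the explanation method used in the paper).
import numpy as np
from sklearn.svm import SVC
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 400
# Columns: supply air temp, return air temp, heat-recycler efficiency (illustrative).
X = rng.normal(size=(n, 3))
y = (X[:, 2] < -0.5).astype(int)          # "fault" when efficiency drops

model = SVC(kernel="rbf").fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["supply_temp", "return_temp", "recycler_efficiency"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")            # the fault-relevant sensor should dominate
```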

Explainable Agent Simulations

Frontmatter
Explaining Aggregate Behaviour in Cognitive Agent Simulations Using Explanation
Abstract
We consider the problem of obtaining useful (and actionable) insight into the behaviour of agent-based simulations that use cognitive agents. When such simulations are being developed and refined, it can be useful to gain an understanding of the simulation's behaviour. In particular, such understanding often needs to be specific to a given scenario (not just high-level generic information about the simulation dynamics) and to concern the aggregate behaviour of multiple agents. We describe a method for taking the explanations of behaviour produced by individual agents and aggregating them to obtain useful information about the aggregate behaviour of multiple agents. The method, which has been implemented, is illustrated in the context of a traffic simulation.
Tobias Ahlbrecht, Michael Winikoff
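As a rough sketch of the aggregation idea (the data layout and counting scheme are my assumptions, not the implemented method), per-agent explanations can be rolled up into a population-level summary:

```python
# Sketch: collect each agent's explanation for its action and aggregate the
# reasons to describe the aggregate behaviour of the simulated population.
from collections import Counter

agent_explanations = [
    {"agent": 1, "action": "take_detour",   "reason": "congestion on main road"},
    {"agent": 2, "action": "take_detour",   "reason": "congestion on main road"},
    {"agent": 3, "action": "stay_on_route", "reason": "detour longer than delay"},
    {"agent": 4, "action": "take_detour",   "reason": "congestion on main road"},
]

by_action = Counter(e["action"] for e in agent_explanations)
by_reason = Counter(e["reason"] for e in agent_explanations)

for action, count in by_action.most_common():
    print(f"{count} agents chose {action}")
for reason, count in by_reason.most_common():
    print(f"  - {count}x: {reason}")
```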
BEN: An Agent Architecture for Explainable and Expressive Behavior in Social Simulation
Abstract
Social simulations are used to study complex systems featuring human actors, which means reproducing real-life situations involving people in order to explain an observed behavior. However, none of the most popular platforms for agent-based simulation currently provides an agent architecture that makes it easy to model human actors. This situation leads modelers to implement simple reactive behaviors, even though the EROS principle (Enhancing Realism Of Simulation) fosters the use of psychological and social theory to improve the credibility of such agents. This paper presents the BEN architecture (Behavior with Emotions and Norms), which uses cognitive, affective, and social dimensions to drive the behavior of social agents. The architecture has been implemented in the GAMA platform so that it may be used by a large audience to model agents with high-level explainable behavior. It is applied to an evacuation case, showing how it creates believable behaviors in a real-case scenario.
Mathieu Bourgais, Patrick Taillandier, Laurent Vercouter
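BEN itself is implemented in the GAMA platform (GAML); the following Python-style sketch is only meant to illustrate how cognitive, affective, and normative state can jointly drive a decision that comes with an explanation attached:

```python
# Illustrative sketch only (not the BEN/GAML implementation): an evacuation
# agent whose beliefs, fear level and social norm jointly select an action.
class EvacuationAgent:
    def __init__(self):
        self.beliefs = {"alarm_on": False, "exit_known": True}
        self.fear = 0.0                      # affective state in [0, 1]
        self.norm_follow_staff = True        # social norm: follow staff instructions
        self.staff_says_wait = False

    def perceive(self, alarm_on, staff_says_wait):
        self.beliefs["alarm_on"] = alarm_on
        self.fear = min(1.0, self.fear + (0.4 if alarm_on else -0.1))
        self.staff_says_wait = staff_says_wait

    def decide(self):
        if self.beliefs["alarm_on"] and self.fear > 0.5:
            return "rush_to_exit", "high fear overrides the norm to wait"
        if self.staff_says_wait and self.norm_follow_staff:
            return "wait", "norm: follow staff instructions"
        if self.beliefs["alarm_on"]:
            return "walk_to_exit", "alarm is on and fear is still moderate"
        return "continue_activity", "no alarm perceived"

agent = EvacuationAgent()
agent.perceive(alarm_on=True, staff_says_wait=True)
print(agent.decide())   # ('wait', 'norm: follow staff instructions')
```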

Planning and Argumentation

Frontmatter
Temporal Multiagent Plan Execution: Explaining What Happened
Abstract
The paper addresses the problem of explaining failures that occurred during the execution of Temporal Multiagent Plans (TMAPs), namely MAPs that contain both logical and temporal constraints on action conditions and effects. We focus particularly on computing explanations that help the user figure out how failures in the execution of one or more actions propagated to later actions. To this end, we define a model that enriches knowledge about the nominal execution of the actions with knowledge about (faulty) execution modes. We present an algorithm for computing diagnoses of TMAP execution failures, where each diagnosis identifies the actions that executed in a faulty mode and those that failed instead because of the propagation of other failures. Diagnoses are then integrated with temporal explanations, which detail what happened during plan execution by specifying temporal relations between the relevant events.
Gianluca Torta, Roberto Micalizio, Samuele Sormano
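A hypothetical simplification of the diagnosis task described above (not the authors' algorithm): given causal links between actions and the set of actions observed to fail, separate primary faults from failures that merely propagated from them.

```python
# Sketch: split failed actions into "faulty execution mode" (no failed
# dependency) and "failed by propagation" (at least one dependency failed).
causal_links = {            # action -> actions whose effects it depends on
    "load_truck": [],
    "drive_to_depot": ["load_truck"],
    "unload": ["drive_to_depot"],
    "report_done": ["unload"],
}
failed = {"drive_to_depot", "unload", "report_done"}

def diagnose(causal_links, failed):
    primary, propagated = set(), set()
    for action in failed:
        failed_deps = [d for d in causal_links[action] if d in failed]
        (propagated if failed_deps else primary).add(action)
    return primary, propagated

primary, propagated = diagnose(causal_links, failed)
print("faulty execution mode:", primary)      # {'drive_to_depot'}
print("failed by propagation:", propagated)   # {'unload', 'report_done'}
```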
Explainable Argumentation for Wellness Consultation
Abstract
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency for their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people bring certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and it reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings and discusses ways in which these can be infused into work on explainable artificial intelligence.
Isabel Sassoon, Nadin Kökciyan, Elizabeth Sklar, Simon Parsons

Explainable AI and Cognitive Science

Frontmatter
A Historical Perspective on Cognitive Science and Its Influence on XAI Research
Abstract
Cognitive science and artificial intelligence are interconnected in that developments in one field can affect the frame of reference for research in the other. Changes in our understanding of how the human mind works inadvertently change how we go about creating artificial minds. Similarly, successes and failures in AI can inspire new directions in cognitive science. This article explores the history of the mind in cognitive science over the last 50 years, draws comparisons as to how this has affected AI research, and examines how AI research in turn has driven shifts in cognitive science. In particular, we look at explainable AI (XAI) and suggest that folk psychology is of particular interest for that area of research. In cognitive science, folk psychology is divided between two theories: theory-theory and simulation theory. We argue that it is important for XAI to recognise and understand this debate, and that reducing reliance on theory-theory by incorporating more simulationist frameworks into XAI could help further the field. We propose that such incorporation would involve robots employing more embodied cognitive processes when communicating with humans, highlighting the importance of bodily action in communication and mindreading.
Marcus Westberg, Amber Zelvelder, Amro Najjar
Backmatter
Metadata
Title
Explainable, Transparent Autonomous Agents and Multi-Agent Systems
Editors
Dr. Davide Calvaresi
Amro Najjar
Prof. Michael Schumacher
Kary Främling
Copyright Year
2019
Electronic ISBN
978-3-030-30391-4
Print ISBN
978-3-030-30390-7
DOI
https://doi.org/10.1007/978-3-030-30391-4
