
2021 | Book

Explainable and Transparent AI and Multi-Agent Systems

Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers

Editors: Dr. Davide Calvaresi, Amro Najjar, Prof. Michael Winikoff, Kary Främling

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the Third International Workshop on Explainable, Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, which was held virtually due to the COVID-19 pandemic.
The 19 long revised papers and 1 short contribution were carefully selected from 32 submissions. The papers are organized in the following topical sections: XAI & machine learning; XAI vision, understanding, deployment and evaluation; XAI applications; XAI logic and argumentation; decentralized and heterogeneous XAI.

Table of Contents

Frontmatter

XAI and Machine Learning

Frontmatter
To Pay or Not to Pay Attention: Classifying and Interpreting Visual Selective Attention Frequency Features
Abstract
Selective attention is the ability to promote the processing of objects important for the accomplishment of our behavioral goals (target objects) over the objects not important to those goals (distractor objects). Previous investigations have shown that the mechanisms of selective attention contribute to enhancing perception in both simple daily tasks and more complex activities requiring learning new information.
Recently, it has been verified that selective attention to target objects and distractor objects is separable in the frequency domain, using Logistic Regression (LR) and Support Vector Machines (SVMs) classification. However, discerning dynamics of target and distractor objects in the context of selective attention has not been accomplished yet.
This paper extends the investigations on the possible classification and interpretation of distraction and intention relying solely on neural activity (frequency features). In particular, this paper (i) classifies distractor objects vs. target objects, replicating the LR classification of prior studies, (ii) extends the analysis by interpreting the coefficient weights of all features, with a focus on N2PC features, and (iii) retrains an LR classifier with the features deemed important by the interpretation analysis.
As a result of the interpretation methods, we successfully decreased the feature set to 7.3% of all features (i.e., from 19,072 to 1,386 features) while recording only a 0.04 loss in accuracy (i.e., from 0.65 to 0.61). Additionally, the interpretation of the classifiers’ coefficient weights unveiled new evidence regarding frequency features, which is discussed throughout the paper.
Lora Fanda, Yashin Dicente Cid, Pawel J. Matusz, Davide Calvaresi
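The coefficient-weight interpretation and retraining workflow summarised in this abstract is not spelled out in this listing; the following is a minimal sketch of that general approach, assuming scikit-learn and synthetic data in place of the paper's EEG frequency features.

```python
# Illustrative sketch only: logistic-regression training, coefficient-based
# feature selection, and retraining on the reduced feature set.
# Synthetic data stands in for the EEG frequency features used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=2000, n_informative=50,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Full model: one coefficient per feature.
full = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("full accuracy:", accuracy_score(y_te, full.predict(X_te)))

# Keep the features with the largest absolute coefficient weights
# (here the top ~7% of features, mirroring the reduction reported above).
k = int(0.07 * X.shape[1])
top = np.argsort(np.abs(full.coef_[0]))[-k:]

# Retrain on the reduced feature set and compare accuracy.
reduced = LogisticRegression(max_iter=5000).fit(X_tr[:, top], y_tr)
print("reduced accuracy:", accuracy_score(y_te, reduced.predict(X_te[:, top])))
```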
GridEx: An Algorithm for Knowledge Extraction from Black-Box Regressors
Abstract
Knowledge-extraction methods are applied to ML-based predictors to attain explainable representations of their operation when the lack of interpretable results constitutes a problem. Several algorithms have been proposed for knowledge extraction, mostly focusing on the extraction of either lists or trees of rules. Yet, most of them only support supervised learning – and, in particular, classification – tasks. Iter is among the few rule-extraction methods capable of extracting symbolic rules out of sub-symbolic regressors. However, its performance – here understood as the interpretability of the rules it extracts – easily degrades as the complexity of the regression task at hand increases.
In this paper we propose GridEx, an extension of the Iter algorithm, aimed at extracting symbolic knowledge – in the form of lists of if-then-else rules – from any sort of sub-symbolic regressor—there including neural networks of arbitrary depth. With respect to Iter, GridEx produces shorter rule lists retaining higher fidelity w.r.t. the original regressor. We report several experiments assessing GridEx performance against Iter and Cart (i.e., decision-tree regressors) used as benchmarks.
Federico Sabbatini, Giovanni Ciatto, Andrea Omicini
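GridEx itself is not reproduced in this listing; as a rough, hedged illustration of the general idea of grid-based rule extraction from a black-box regressor (not the authors' algorithm), one could partition the input space into cells and describe each cell with an if-then rule:

```python
# Rough illustration of grid-based rule extraction from a black-box regressor.
# This is NOT the GridEx algorithm itself, only the general idea of mapping
# input-space cells to constant predictions expressed as if-then rules.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # unknown target function

black_box = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# Partition each input dimension into a fixed number of intervals and
# describe each cell by the black box's mean prediction inside it.
n_splits = 3
edges = np.linspace(0, 1, n_splits + 1)
rules = []
for i in range(n_splits):
    for j in range(n_splits):
        cell = X[(X[:, 0] >= edges[i]) & (X[:, 0] < edges[i + 1]) &
                 (X[:, 1] >= edges[j]) & (X[:, 1] < edges[j + 1])]
        if len(cell):
            pred = black_box.predict(cell).mean()
            rules.append(f"if x0 in [{edges[i]:.2f}, {edges[i+1]:.2f}) and "
                         f"x1 in [{edges[j]:.2f}, {edges[j+1]:.2f}) "
                         f"then y = {pred:.3f}")
print("\n".join(rules))
```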
Comparison of Contextual Importance and Utility with LIME and Shapley Values
Abstract
Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. An explanation that has poor fidelity towards the AI system’s actual behaviour cannot be trusted, no matter how convincing the explanations appear to be to the users. The Contextual Importance and Utility (CIU) method differs from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values in several ways. Notably, CIU does not build any intermediate interpretable model like LIME, and it does not make any assumption regarding linearity or additivity of the feature importance. CIU also introduces the notion of value utility and a definition of feature importance that differs from LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of the validity of their ground-truth assumptions and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when limiting explanations to ‘influence’. However, by separating ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
Kary Främling, Marcus Westberg, Martin Jullum, Manik Madhikermi, Avleen Malhi
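As a hedged illustration of the CIU notions mentioned above (following the commonly cited definitions rather than any code from the paper), contextual importance can be read as the share of the output range covered when a feature varies within its context, and contextual utility as where the current output sits within that range:

```python
# Hedged illustration of Contextual Importance (CI) and Contextual Utility (CU)
# for a single feature, following the commonly cited definitions rather than
# any implementation from the paper.
import numpy as np

def model(x):
    # Toy black-box model with two inputs in [0, 1].
    return 0.7 * x[0] ** 2 + 0.3 * x[1]

def ciu(model, instance, feature, n_samples=1000, out_min=0.0, out_max=1.0):
    """CI = (Cmax - Cmin) / (out_max - out_min); CU = (y - Cmin) / (Cmax - Cmin)."""
    samples = np.tile(instance, (n_samples, 1)).astype(float)
    samples[:, feature] = np.random.uniform(0, 1, n_samples)  # vary one feature
    outputs = np.apply_along_axis(model, 1, samples)
    c_min, c_max = outputs.min(), outputs.max()
    y = model(instance)
    ci = (c_max - c_min) / (out_max - out_min)
    cu = (y - c_min) / (c_max - c_min)
    return ci, cu

instance = np.array([0.8, 0.2])
for f in (0, 1):
    ci, cu = ciu(model, instance, f)
    print(f"feature {f}: CI={ci:.2f}, CU={cu:.2f}")
```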
ciu.image: An R Package for Explaining Image Classification with Contextual Importance and Utility
Abstract
Many techniques have been proposed in recent years that attempt to explain the results of image classifiers, notably for the case when the classifier is a deep neural network. This paper presents an implementation of the Contextual Importance and Utility method for explaining image classifications. It is an R package that can be used with the most common image classification models. The paper shows results for typical benchmark images, as well as for a medical data set of gastro-enterological images. For comparison, results produced by the LIME method are included. Results show that CIU produces similar or better results than LIME with significantly shorter calculation times. However, the main purpose of this paper is to bring the existence of this package to general knowledge and use, rather than to compare it with other explanation methods.
Kary Främling, Samanta Knapič, Avleen Malhi
Shallow2Deep: Restraining Neural Networks Opacity Through Neural Architecture Search
Abstract
Recently, the Deep Learning (DL) research community has focused on developing efficient and highly performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS) (a sub-field of DL looking for the automatic design of NN structures) in XAI. We propose Shallow2Deep, an evolutionary NAS algorithm that exploits local variability to restrain the opacity of DL systems through the simplification of NN architectures. Shallow2Deep effectively reduces NN complexity – and therefore their opacity – while reaching state-of-the-art performance. Unlike its competitors, Shallow2Deep promotes variability of localised structures in NN, helping to reduce NN opacity. The proposed work analyses the role of local variability in NN architecture design, presenting experimental results that show how this feature is actually desirable.
Andrea Agiollo, Giovanni Ciatto, Andrea Omicini
Visual Explanations for DNNs with Contextual Importance
Abstract
Autonomous agents and robots with vision capabilities powered by machine learning algorithms such as Deep Neural Networks (DNNs) are being deployed in many industrial environments. While DNNs have improved the accuracy of many prediction tasks, it has been shown that even modest disturbances in their input produce erroneous results. Such errors have to be detected and dealt with to make the deployment of DNNs secure in real-world applications. Several explanation methods have been proposed to understand the inner workings of these models. In this paper, we present how Contextual Importance (CI) can make DNN results more explainable in an image classification task without peeking inside the network. We produce explanations for individual classifications by perturbing an input image through over-segmentation and evaluating the effect on the prediction score. The output then highlights the most contributing segments for a prediction. Results are compared with two explanation methods, namely mask perturbation and LIME. The results for the MNIST hand-written digit dataset produced by the three methods show that CI provides better visual explainability.
Sule Anjomshoae, Lili Jiang, Kary Främling
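A minimal sketch of the segment-perturbation idea described in this abstract is given below; it assumes scikit-image's SLIC over-segmentation and a placeholder `predict_proba` function standing in for any DNN classifier, and is not the paper's implementation.

```python
# Illustrative sketch of segment-level perturbation explanations (the general
# idea described above, not the paper's implementation). `predict_proba` is a
# placeholder for any image classifier returning a score for the target class.
import numpy as np
from skimage.segmentation import slic

def predict_proba(image):
    # Placeholder classifier: stands in for a DNN's score for the target class.
    return float(image.mean())

def segment_contributions(image, n_segments=50):
    segments = slic(image, n_segments=n_segments, start_label=0)
    baseline = predict_proba(image)
    scores = {}
    for seg_id in np.unique(segments):
        perturbed = image.copy()
        perturbed[segments == seg_id] = 0          # grey/zero out one segment
        # A large drop in the score means the segment contributed strongly.
        scores[seg_id] = baseline - predict_proba(perturbed)
    return scores

image = np.random.rand(64, 64, 3)                  # toy image
scores = segment_contributions(image)
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("most contributing segments:", top)
```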
Towards Explainable Recommendations of Resource Allocation Mechanisms in On-Demand Transport Fleets
Abstract
Multi-agent systems can be considered a natural paradigm when modeling various transportation systems, whose management involves solving hard, dynamic, and distributed allocation problems. Such problems have been studied for decades, and various solutions have been proposed. However, even the most straightforward resource allocation mechanisms lead to debates on efficiency vs. fairness, business quality vs. passenger’s user experience, or performance vs. robustness. We aim to design an analytical tool that functions as a recommendation system for on-demand transport (ODT) authorities. This tool recommends specific allocation mechanisms that match the authority’s objectives and preferences to solve allocation problems for particular contextual scenarios. The paper emphasizes the need for transparency and explainability of resource allocation decisions in ODT systems to be understandable by humans and move toward a more controllable resource allocation. We propose in this preliminary work a multi-agent architecture and general implementation guidelines towards meeting these requirements.
Alaa Daoud, Hiba Alqasir, Yazan Mualla, Amro Najjar, Gauthier Picard, Flavien Balbo

XAI Vision, Understanding, Deployment and Evaluation

Frontmatter
A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable, or Understandable
Abstract
Because of recent and rapid developments in Artificial Intelligence (AI), humans and AI-systems increasingly work together in human-agent teams. However, in order to effectively leverage the capabilities of both, AI-systems need to be understandable to their human teammates. The branch of eXplainable AI (XAI) aspires to make AI-systems more understandable to humans, potentially improving human-agent teamwork. Unfortunately, XAI literature suffers from a lack of agreement regarding the definitions of and relations between the four key XAI-concepts: transparency, interpretability, explainability, and understandability. Inspired by both XAI and social sciences literature, we present a two-dimensional framework that defines and relates these concepts in a concise and coherent way, yielding a classification of three types of AI-systems: incomprehensible, interpretable, and understandable. We also discuss how the established relationships can be used to guide future research into XAI, and how the framework could be used during the development of AI-systems as part of human-AI teams.
Ruben S. Verhagen, Mark A. Neerincx, Myrthe L. Tielman
Towards Explainable Visionary Agents: License to Dare and Imagine
Abstract
Since their appearance, computer programs have embodied discipline and structured approaches and methodologies. Yet, to this day, equipping machines with imaginative and creative capabilities remains one of the most challenging and fascinating goals we pursue. Intelligent software agents can behave intelligently in well-defined scenarios, relying on Machine Learning (ML), symbolic reasoning, and the ability of their developers to tailor smart behaviors to specific application domains. However, forecasting the evolution of all possible scenarios is unfeasible. Thus, intelligent agents should autonomously/creatively adapt to the world’s mutability. This paper investigates the meaning of imagination in the context of cognitive agents. In particular, it addresses techniques and approaches to let agents autonomously imagine/simulate their course of action and generate explanations supporting it, and it formalizes thematic challenges. Accordingly, we investigate research areas including: (i) reasoning and automatic theorem proving to synthesize novel knowledge via inference; (ii) automatic planning and simulation, used to speculate over alternative courses of action; (iii) machine learning and data mining, exploited to induce new knowledge from experience; and (iv) biochemical coordination, which keeps imagination dynamic by continuously reorganizing it.
Giovanni Ciatto, Amro Najjar, Jean-Paul Calbimonte, Davide Calvaresi
Towards an XAI-Assisted Third-Party Evaluation of AI Systems: Illustration on Decision Trees
Abstract
We explored the potential contribution of eXplainable Artificial Intelligence (XAI) to the evaluation of Artificial Intelligence (AI), in a context where such an evaluation is performed by independent third-party evaluators, for example for certification purposes. The experimental approach of this paper is based on “explainable by design” decision trees that produce predictions on health data and bank data. The results presented in this paper show that the explanations could be used by the evaluators to identify the parameters used in decision making and their levels of importance. The explanations would thus make it possible to orient the constitution of the evaluation corpus, to explore the rules followed for decision-making, and to identify potentially critical relationships between different parameters. In addition, the explanations make it possible to inspect for the presence of bias in the database and in the algorithm. These first results lay the groundwork for further research aimed at generalizing the conclusions of this paper to different XAI methods.
Yongxin Zhou, Matthieu Boussard, Agnes Delaborde
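For illustration only (not the paper's experimental setup), the kind of "explainable by design" decision-tree inspection described above, i.e. reading off parameter importances and decision rules, can be sketched with scikit-learn on a public dataset:

```python
# Minimal sketch of "explainable by design" decision-tree inspection
# (feature importances and decision rules), using a public dataset
# in place of the paper's health and bank data.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Parameters used in decision making and their levels of importance.
for name, importance in zip(data.feature_names, tree.feature_importances_):
    if importance > 0:
        print(f"{name}: {importance:.3f}")

# The explicit decision rules followed by the model.
print(export_text(tree, feature_names=list(data.feature_names)))
```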
What Does It Cost to Deploy an XAI System: A Case Study in Legacy Systems
Abstract
Enterprise Resource Planning (ERP) software is used by businesses and extended via customisation. Automated custom code analysis and migration is a critical issue at ERP release upgrade times. Despite research advances, automated code analysis and transformation require a huge amount of manual work related to parser adaptation, rule extension and post-processing. These operations become unmanageable if the frequency of updates increases from yearly to monthly intervals. This article describes how the process from custom code analysis to custom code transformation can be automated in an explainable way. We develop an aggregate taxonomy for explainability and analyse the requirements based on roles. We explain in which steps of the new code migration process machine learning is used. Further, we analyse the additional effort needed to make the new way of code migration explainable to different stakeholders.
Sviatlana Höhn, Niko Faradouris

XAI Applications

Frontmatter
Explainable AI (XAI) Models Applied to the Multi-agent Environment of Financial Markets
Abstract
Financial markets are a real-life multi-agent system that is well known to be hard to explain and interpret. We consider a gradient boosting decision trees (GBDT) approach to predict large S&P 500 price drops from a set of 150 technical, fundamental and macroeconomic features. We report an improved accuracy of GBDT over other machine learning (ML) methods on the S&P 500 futures prices. We show that retaining fewer and carefully selected features provides improvements across all ML approaches. Shapley values have recently been introduced from game theory to the field of ML. They allow for a robust identification of the most important variables predicting stock market crises, and for a local explanation of the crisis probability at each date, through consistent feature attribution. We apply this methodology to analyse in detail the March 2020 financial meltdown, for which the model offered a timely out-of-sample prediction. This analysis unveils in particular the contrarian predictive role of the tech equity sector before and after the crash.
Jean Jacques Ohana, Steve Ohana, Eric Benhamou, David Saltiel, Beatrice Guez
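A hedged sketch of the general GBDT-plus-Shapley workflow described above is shown below; it uses scikit-learn and the shap package on synthetic data rather than the paper's 150 market features, and is not the authors' model.

```python
# Hedged sketch of the general GBDT + Shapley-value workflow described above,
# using synthetic data rather than the paper's 150 market features.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer gives per-feature, per-observation attributions, so each
# predicted "crash probability" can be decomposed into feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute contribution per feature.
global_importance = np.abs(shap_values).mean(axis=0)
print("top features:", np.argsort(global_importance)[::-1][:5])

# Local explanation of a single date/observation.
print("local attribution for sample 0:", shap_values[0][:5])
```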
Toward XAI & Human Synergies to Explain the History of Art: The Smart Photobooth Project
Abstract
The advent of Artificial Intelligence (AI) has brought about significant changes in our daily lives, with applications in industry, smart cities, agriculture, and telemedicine. Despite the successes of AI in other “less-technical” domains, human-AI synergies are required to ensure user engagement and provide interactive expert knowledge. This is notably the case for applications related to art, since the appreciation and comprehension of art is considered to be an exclusively human capacity. This paper discusses potential human-AI synergies aimed at explaining the history of art and artistic style transfer. This work is done in the context of the “Smart Photobooth” project, which runs within the AI & Art pavilion, a satellite event of Esch2022 European Capital of Culture whose main aim is to reflect on AI and the future of art. The project is mainly an outreach and knowledge-dissemination project: it uses a smart photo booth, capable of automatically transforming the user’s picture into a well-known artistic style (e.g., impressionism), as an interactive approach to introduce the principles of the history of art to the general public and provide them with a simple explanation of different art painting styles. Whereas some cutting-edge AI algorithms can provide insights on what constitutes an artistic style at the visual level, the information provided by human experts is essential to explain the historical and political context in which the style emerged. To bridge this gap, this paper explores human-AI synergies in which the explanation generated by the eXplainable AI (XAI) mechanism is coupled with insights from the human expert to provide explanations for school students as well as a wider audience. Open issues and challenges are also identified and discussed.
Egberdien van der Peijl, Amro Najjar, Yazan Mualla, Thiago Jorge Bourscheid, Yolanda Spinola-Elias, Daniel Karpati, Sana Nouzri
Assessing Explainability in Reinforcement Learning
Abstract
Reinforcement Learning performs well in many different application domains and is starting to receive greater authority and trust from its users. But most people are unfamiliar with how AIs make their decisions, and many of them feel anxious about AI decision-making. As a result, AI methods suffer from trust issues, and this hinders their full-scale adoption. In this paper we determine what the main application domains of Reinforcement Learning are, and to what extent research in those domains has explored explainability. This paper reviews examples from the most active application domains for Reinforcement Learning and suggests some guidelines to assess the importance of explainability for these applications. We present some key factors that should be included in evaluating these applications and show how they apply to the examples found. By using these assessment criteria to evaluate the explainability needs of Reinforcement Learning, the research field can be guided towards increasing transparency and trust through explanations.
Amber E. Zelvelder, Marcus Westberg, Kary Främling

XAI Logic and Argumentation

Frontmatter
Schedule Explainer: An Argumentation-Supported Tool for Interactive Explanations in Makespan Scheduling
Abstract
Scheduling is a fundamental optimisation problem with a wide range of practical applications. Mathematical formulations of scheduling problems allow for the development of efficient solvers. Yet, the same mathematical intricacies often make solvers black boxes: their outcomes are hardly explainable or interactive even for experts, let alone lay users. Still, in real-world applications as well as research environments, lay users and experts alike require a means to understand why a schedule is reasonable and what would happen with different schedules. Building upon a recently proposed approach to argumentation-supported explainable scheduling, we present a tool, Schedule Explainer, that provides interactive explanations in makespan scheduling easily and with clarity.
Kristijonas Čyras, Myles Lee, Dimitrios Letsios
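Schedule Explainer itself is not shown here; as a toy illustration of the kind of "what would happen with a different schedule" question it answers, one can compare the makespans of a proposed and an alternative job-to-machine assignment (the jobs and machines below are hypothetical):

```python
# Toy illustration (not the Schedule Explainer tool itself) of the kind of
# "what if" comparison it supports: makespan of a proposed job-to-machine
# assignment versus an alternative the user might ask about.
def makespan(assignment, durations):
    loads = {}
    for job, machine in assignment.items():
        loads[machine] = loads.get(machine, 0) + durations[job]
    return max(loads.values())

durations = {"j1": 4, "j2": 3, "j3": 2, "j4": 2}
proposed    = {"j1": "m1", "j2": "m2", "j3": "m2", "j4": "m1"}  # makespan 6
alternative = {"j1": "m1", "j2": "m1", "j3": "m2", "j4": "m2"}  # makespan 7

print("proposed makespan:   ", makespan(proposed, durations))
print("alternative makespan:", makespan(alternative, durations))
# An explanation would point out that moving j2 to m1 increases the makespan.
```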
Towards Explainable Practical Agency
A Logical Perspective
Abstract
Practical reasoning is such an essential cornerstone of artificial intelligence that it is impossible to see how autonomous agents could be realized without it. As a first step of practical reasoning, an autonomous agent is required to form its intentions by choosing amongst its motivations in light of its beliefs. An autonomous agent is also expected to seamlessly revise its intentions whenever its beliefs or motivations change. In the modern world, it becomes a pressing priority to endow agents with explainable practical reasoning capabilities in order to foster the trustworthiness of artificial agents. An adequate framework of practical reasoning must be able to (i) capture the process of intention formation, (ii) model the joint revision of beliefs and intentions, and (iii) provide explanations for the chosen beliefs and intentions. Despite the abundance of approaches in the literature for modelling practical reasoning, existing approaches fail to possess at least one of the previously mentioned capabilities. In this paper, we present formal algebraic semantics for a logical language that can be used for practical reasoning. We demonstrate how our language possesses all of the aforementioned capabilities, providing an adequate framework for explainable practical reasoning.
Nourhan Ehab, Haythem O. Ismail
Explainable Reasoning in Face of Contradictions: From Humans to Machines
Abstract
A well-studied trait of human reasoning and decision-making is the ability to not only make decisions in the presence of contradictions, but also to explain why a decision was made, in particular if a decision deviates from what is expected by an inquirer who requests the explanation. In this paper, we examine this phenomenon, which has been extensively explored by behavioral economics research, from the perspective of symbolic artificial intelligence. In particular, we introduce four levels of intelligent reasoning in face of contradictions, which we motivate from a microeconomics and behavioral economics perspective. We relate these principles to symbolic reasoning approaches, using abstract argumentation as an exemplary method. This allows us to ground the four levels in a body of related previous and ongoing research, which we use as a point of departure for outlining future research directions.
Timotheus Kampik, Dov Gabbay
Towards Transparent Legal Formalization
Abstract
A key challenge in making a transparent formalization of a legal text is the dependency on two domain experts. While a legal expert is needed in order to interpret the legal text, a logician or a programmer is needed for encoding it into a program or a formula. Various existing methods are trying to solve this challenge by improving or automating the communication between the two experts. In this paper, we follow a different direction and attempt to eliminate the dependency on the target domain expert. This is achieved by inverting the translation back into the original text. By skipping over the logical translation, a legal expert can now both interpret and evaluate a translation.
Tomer Libal, Tereza Novotná
Applying Abstract Argumentation to Normal-Form Games
Abstract
Game theory is the most common approach to studying strategic interactions between agents, but it provides little explanation for game-theoretical solution concepts. In this paper, we use a game-based argumentation framework to solve normal-form games. The result is that solution concepts in game theory can be interpreted through the extensions of a game-based argumentation framework. We can thus use our framework to solve normal-form games while providing explanations for the solution concepts.
You Cheng, Beishui Liao, Jieting Luo
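The authors' game-based argumentation framework is not reproduced here; as a generic illustration of argumentation extensions (the kind of semantics used to interpret solution concepts), the stable extensions of a small abstract argumentation framework can be enumerated by brute force:

```python
# Illustrative sketch of abstract-argumentation extensions in general
# (not the authors' game-based framework): enumerate the stable extensions
# of a small argumentation framework by brute force.
from itertools import combinations

arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}   # a <-> b, b -> c

def conflict_free(s):
    return not any((x, y) in attacks for x in s for y in s)

def attacks_all_outside(s):
    outside = arguments - set(s)
    return all(any((x, y) in attacks for x in s) for y in outside)

stable = [set(s)
          for r in range(len(arguments) + 1)
          for s in combinations(sorted(arguments), r)
          if conflict_free(s) and attacks_all_outside(s)]
print("stable extensions:", stable)   # expected: {'b'} and {'a', 'c'}
```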

Decentralized and Heterogeneous XAI

Frontmatter
Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge
Abstract
Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies to interpret and explain machine learning (ML) predictors. To date, many initiatives have been proposed. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis. However, explanation techniques are still embryonic, and they mainly target ML experts rather than heterogeneous end-users. Furthermore, existing solutions assume data to be centralised, homogeneous, and fully/continuously accessible – circumstances seldom found altogether in practice. Arguably, a system-wide perspective is currently missing.
The project named “Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge” (Expectation) aims at overcoming such limitations. This manuscript presents the overall objectives and approach of the Expectation project, focusing on the theoretical and practical advancement of the state of the art of XAI towards the construction of personalised explanations in spite of decentralisation and heterogeneity of knowledge, agents, and explainees (whether human or virtual).
To tackle the challenges posed by personalisation, decentralisation, and heterogeneity, the project fruitfully combines abstractions, methods, and approaches from the multi-agent systems, knowledge extraction/injection, negotiation, argumentation, and symbolic reasoning communities.
Davide Calvaresi, Giovanni Ciatto, Amro Najjar, Reyhan Aydoğan, Leon Van der Torre, Andrea Omicini, Michael Schumacher
Backmatter
Metadata
Title
Explainable and Transparent AI and Multi-Agent Systems
Editors
Dr. Davide Calvaresi
Amro Najjar
Prof. Michael Winikoff
Kary Främling
Copyright Year
2021
Electronic ISBN
978-3-030-82017-6
Print ISBN
978-3-030-82016-9
DOI
https://doi.org/10.1007/978-3-030-82017-6
