
2021 | OriginalPaper | Chapter

Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations

Authors : Bettina Finzel, David E. Tafler, Stephan Scheele, Ute Schmid

Published in: KI 2021: Advances in Artificial Intelligence

Publisher: Springer International Publishing


Abstract

In recent years, XAI research has mainly been concerned with developing new technical approaches to explaining deep learning models. Only recently has research started to acknowledge the need to tailor explanations to the different contexts and requirements of stakeholders. Explanations must not only suit developers of models, but also domain experts and end users. Thus, in order to satisfy different stakeholders, explanation methods need to be combined. While multi-modal explanations have been used to make model predictions more transparent, less research has focused on treating explanation as a process in which users can ask for information according to the level of understanding gained at a certain point in time. Consequently, besides multi-modal explanations, users should be given the opportunity to explore explanations at different levels of abstraction. We present a process-based approach that combines multi-level and multi-modal explanations. The user can ask for textual explanations or visualizations through conversational interaction in a drill-down manner. We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model. Further, we present an algorithm that creates an explanatory tree for each example for which a classifier decision is to be explained. The explanatory tree can be navigated by the user to obtain answers at different levels of detail. We provide a proof-of-concept implementation for concepts induced from a semantic net about living beings.
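The abstract describes an explanatory tree that a user navigates in a drill-down manner to obtain answers at different levels of detail. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the class name, the example statements, and the toy "tweety" rules are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationNode:
    """One statement in an explanatory tree; children refine it with more detail."""
    statement: str
    children: List["ExplanationNode"] = field(default_factory=list)

    def drill_down(self, path: List[int]) -> str:
        """Follow a sequence of child indices to a more detailed answer."""
        node = self
        for i in path:
            node = node.children[i]
        return node.statement

# Hypothetical tree for a classifier decision about a living being
# (the concept and rules are illustrative, not taken from the paper).
tree = ExplanationNode(
    "tweety is classified as a bird",
    [
        ExplanationNode(
            "because tweety has feathers",
            [ExplanationNode("feathers were detected on the body")],
        ),
        ExplanationNode("because tweety lays eggs"),
    ],
)

print(tree.drill_down([]))      # top-level answer
print(tree.drill_down([0]))     # first reason, one level deeper
print(tree.drill_down([0, 0]))  # further detail on that reason
```

In a conversational setting, each user follow-up question would correspond to descending one level in such a tree, so the depth reached reflects the level of detail the user has asked for.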


Footnotes
2
Retrieval through semantic search can be performed, for example, over Semantic MediaWiki: https://www.semantic-mediawiki.org/wiki/Help:Semantic_search.
 
Metadata
Title
Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations
Authors
Bettina Finzel
David E. Tafler
Stephan Scheele
Ute Schmid
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-87626-5_7
