
2019 | Original Paper | Book Chapter

Explanations of Black-Box Model Predictions by Contextual Importance and Utility

Authors: Sule Anjomshoae, Kary Främling, Amro Najjar

Published in: Explainable, Transparent Autonomous Agents and Multi-Agent Systems

Publisher: Springer International Publishing

Abstract

The significant advances in autonomous systems, together with a vastly wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers as a way to address this need. Although there is a growing number of works on interpretable and transparent machine learning algorithms, they are mostly aimed at technical users; explanations for end-users have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts for extracting explanations that are easily understandable by experts as well as novice users. The method explains prediction results without transforming the model into an interpretable one. We present examples of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of the explanations in a car-selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the proposed explanation methods.
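
As a rough illustration of how these two numerical values relate to a black-box model, the sketch below estimates CI and CU for one feature of a single prediction by probing the model while the rest of the context is held fixed. The function name, the uniform grid sampling, and the toy model are illustrative assumptions, not the authors' implementation: CI is taken here as the share of the total output range the feature can span in the given context, and CU as where the current output lies within that span.

```python
import numpy as np

def contextual_importance_utility(predict, x, feature_idx, feature_range,
                                  out_min, out_max, n_samples=1000):
    """Hypothetical CI/CU estimate for one feature of one prediction.

    predict         -- black-box model mapping a 1-D feature vector to a scalar
    x               -- the instance (context) being explained
    feature_idx     -- index of the feature under study
    feature_range   -- (low, high) admissible values for that feature
    out_min/out_max -- absolute min/max of the model output
    """
    y = predict(x)

    # Vary only the feature of interest across its range while the
    # rest of the context stays fixed; record the output extremes.
    probes = np.tile(x, (n_samples, 1))
    probes[:, feature_idx] = np.linspace(feature_range[0], feature_range[1],
                                         n_samples)
    outputs = np.array([predict(p) for p in probes])
    cmin, cmax = outputs.min(), outputs.max()

    # Contextual Importance: fraction of the total output range that
    # this feature can cover in the current context.
    ci = (cmax - cmin) / (out_max - out_min)
    # Contextual Utility: position of the actual output within that
    # context-specific span (0 = least favourable, 1 = most favourable).
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.0
    return ci, cu

# Toy usage with a linear "black box"; feature 0 dominates, so its CI is high.
model = lambda v: 2.0 * v[0] + 0.5 * v[1]
ci, cu = contextual_importance_utility(model, np.array([0.3, 0.8]),
                                       feature_idx=0, feature_range=(0.0, 1.0),
                                       out_min=0.0, out_max=2.5)
print(f"CI = {ci:.2f}, CU = {cu:.2f}")  # CI = 0.80, CU = 0.30
```

Numbers of this kind map directly onto the natural-language explanations the abstract describes: a feature with high importance but low utility in the current context, for example, argues against the predicted outcome.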


Metadata
Title
Explanations of Black-Box Model Predictions by Contextual Importance and Utility
Authors
Sule Anjomshoae
Kary Främling
Amro Najjar
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-30391-4_6
