
2021 | Original Paper | Chapter

Towards Model-Agnostic Ensemble Explanations

Authors : Szymon Bobek, Paweł Bałaga, Grzegorz J. Nalepa

Published in: Computational Science – ICCS 2021

Publisher: Springer International Publishing


Abstract

Explainable Artificial Intelligence (XAI) methods form a large portfolio of different frameworks and algorithms. Although the main goal of all explanation methods is to provide insight into the decision process of an AI system, their underlying mechanisms may differ. This can result in very different explanations for the same task. In this work, we present an approach that combines several XAI algorithms into one ensemble explanation mechanism via a quantitative, automated evaluation framework. We focus on model-agnostic explainers for maximal robustness, and we demonstrate our approach on an image classification task.
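To illustrate the idea of an ensemble explanation, the sketch below combines per-pixel attribution maps from several explainers (e.g. LIME- or SHAP-style outputs) into one map via a weighted average, where the weights could come from an automated quality metric. This is a hypothetical minimal sketch of the general technique, not the authors' implementation; the function name and the toy maps are illustrative assumptions.

```python
import numpy as np

def ensemble_explanation(attributions, weights=None):
    """Combine attribution maps from several explainers into one
    ensemble map via a weighted average.

    attributions: list of 2-D arrays (one per explainer), same shape.
    weights: optional per-explainer quality scores (e.g. produced by an
             automated evaluation metric); uniform if omitted.
    """
    maps = []
    for a in attributions:
        a = np.asarray(a, dtype=float)
        # Min-max normalize each map to [0, 1] so explainers with
        # different output scales contribute comparably.
        span = a.max() - a.min()
        maps.append((a - a.min()) / span if span > 0 else np.zeros_like(a))
    maps = np.stack(maps)

    if weights is None:
        weights = np.ones(len(maps))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    # Weighted average across the explainer axis.
    return np.tensordot(weights, maps, axes=1)

# Toy example: two 2x2 attribution maps, the second trusted twice as much.
m1 = np.array([[0.0, 1.0], [0.0, 0.0]])
m2 = np.array([[0.0, 0.0], [1.0, 0.0]])
combined = ensemble_explanation([m1, m2], weights=[1.0, 2.0])
```

In practice, the weights would be derived from evaluation criteria such as explanation fidelity or stability, so that better-scoring explainers dominate the ensemble.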


Metadata
Copyright Year: 2021
DOI: https://doi.org/10.1007/978-3-030-77970-2_4
