
2021 | Original Paper | Book Chapter

Post-hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective

Authors: Eoin M. Kenny, Eoin D. Delaney, Derek Greene, Mark T. Keane

Published in: Pattern Recognition. ICPR International Workshops and Challenges

Publisher: Springer International Publishing


Abstract

This paper profiles recent research on eXplainable AI (XAI) at the Insight Centre for Data Analytics. The work concentrates on post-hoc explanation-by-example solutions to XAI as one approach to explaining black-box deep-learning systems. Three different methods of post-hoc explanation are outlined for image and time-series datasets: factual, counterfactual, and semi-factual methods. The future landscape for XAI solutions is discussed.
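The factual, explanation-by-example idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's exact twin-system method): to explain a prediction, retrieve the training cases that lie nearest to the query in a learned feature space, such as a network's penultimate-layer activations. The feature vectors and labels below are toy data.

```python
import numpy as np

def factual_explanation(query_feat, train_feats, train_labels, k=3):
    """Post-hoc explanation-by-example: return the indices and labels of
    the k training cases nearest to the query in the model's feature space."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                           # k closest training cases
    return nearest, train_labels[nearest]

# Toy demo: 2-D "latent features" for five training cases and their classes.
train_feats = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1], [5.0, 5.0]])
train_labels = np.array([0, 0, 1, 1, 2])

idx, labels = factual_explanation(np.array([0.05, 0.0]), train_feats, train_labels, k=2)
print(idx, labels)  # the two nearest training cases and their classes
```

In a real system the features would come from the deep network being explained, so the retrieved neighbours show the user concrete training examples that the model treats as most similar to the query.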


Footnotes
1
Here, we consider factual examples as explanations; note that LIME [36] also gives factual information about the current test instance via feature-importance scores.
 
References
3. Bagnall, A., et al.: The great time series classification bake off: an experimental evaluation of recently proposed algorithms. Extended version. arXiv:1602.01711 (2016)
4. Byrne, R.M.J.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)
5. Chen, C., et al.: This looks like that. In: NeurIPS (2020)
8. Ford, C., et al.: Play MNIST for me! User studies on the effects of post-hoc, example-based explanations & error rates on debugging a deep learning, black-box classifier. In: IJCAI 2020 XAI Workshop (2020)
9. Forestier, G., et al.: Generating synthetic time series to augment sparse datasets. In: 2017 IEEE International Conference on Data Mining (2017)
11. Gilpin, L.H., et al.: Explaining explanations: an approach to evaluating interpretability of machine learning. arXiv:1806.00069 (2018)
13. Karlsson, I., et al.: Explainable time series tweaking via irreversible and reversible temporal transformations. arXiv:1809.05183 (2018)
14. Keane, M., Kenny, E.: How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 155–171. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_11
15. Keane, M.T., Kenny, E.M.: The twin-system approach as one generic solution for XAI. In: IJCAI 2019 XAI Workshop (2019)
17. Kenny, E.M., et al.: Bayesian case-exclusion and personalized explanations for sustainable dairy farming. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020) (2020)
18. Kenny, E., et al.: Predicting grass growth for sustainable dairy farming: a CBR system using Bayesian case-exclusion and post-hoc, personalized explanation-by-example (XAI). In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 172–187. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_12
19. Kenny, E.M., Keane, M.T.: On generating plausible counterfactual and semi-factual explanations for deep learning. arXiv:2009.06399 (2020)
20. Kenny, E.M., Keane, M.T.: Twin-systems to explain artificial neural networks using case-based reasoning. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)
23. Laugel, T., et al.: The dangers of post-hoc interpretability: unjustified counterfactual explanations. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019) (2019)
26. Linyi, Y., et al.: Generating plausible counterfactual explanations for deep transformers in financial text classification. In: Proceedings of the 28th International Conference on Computational Linguistics (2020)
28. Mittelstadt, B., et al.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (2019)
29. Mueen, A., Keogh, E.: Extracting optimal performance from dynamic time warping. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)
30. Nguyen, T.T., Le Nguyen, T., Ifrim, G.: A model-agnostic approach to quantifying the informativeness of explanation methods for time series classification. In: Lemaire, V., Malinowski, S., Bagnall, A., Guyet, T., Tavenard, R., Ifrim, G. (eds.) AALTD 2020. LNCS (LNAI), vol. 12588, pp. 77–94. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65742-0_6
33. Papernot, N., McDaniel, P.: Deep k-nearest neighbors: towards confident, interpretable and robust deep learning. arXiv:1803.04765 (2018)
34. Petitjean, F., et al.: A global averaging method for dynamic time warping, with applications to clustering. Pattern Recogn. 44, 678–693 (2011)
36. Ribeiro, M.T., et al.: "Why should I trust you?": explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016) (2016)
38. Seah, J.C.Y., et al.: Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 290(2), 514–522 (2019)
40. Wachter, S., et al.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. SSRN J. 31 (2017)
42. Hohman, F., Kahng, M., Pienta, R., Chau, D.H.: Visual analytics in deep learning. IEEE Trans. Visual. Comput. Graphics 25, 2674–2693 (2018)
Metadata
Title
Post-hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective
Authors
Eoin M. Kenny
Eoin D. Delaney
Derek Greene
Mark T. Keane
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-68796-0_2