
2021 | Original Paper | Book Chapter

Introducing Uncertainty into Explainable AI Methods

Authors: Szymon Bobek, Grzegorz J. Nalepa

Published in: Computational Science – ICCS 2021

Publisher: Springer International Publishing


Abstract

Learning from uncertain or incomplete data is one of the major challenges in building artificial intelligence systems. However, research in this area has focused more on the impact of uncertainty on algorithm performance and robustness than on human understanding of the model and the explainability of the system. In this paper we present our work on knowledge discovery from uncertain data and show how it can be used to improve system interpretability by generating Local Uncertain Explanations (LUX) for machine learning models. We present a method that propagates data uncertainty into the explanation model, providing more insight into the certainty of the decision-making process and of the explanations of those decisions. We demonstrate the method on a synthetic, reproducible dataset and compare it with the most popular explanation frameworks.
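
The core idea sketched in the abstract, propagating the uncertainty of a black-box model's predictions into an interpretable local surrogate, can be illustrated with a short Python example. The code below is not the authors' LUX algorithm; it is a generic LIME-style sketch under assumed choices: a shallow decision tree is fitted on a sampled neighbourhood of the instance to be explained, and the black box's class probabilities are used as sample weights so that uncertain predictions contribute less to the local explanation. All variable names, the sampling scheme, and the weighting rule are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic, reproducible dataset and a black-box model (illustrative choices).
X, y = make_classification(n_samples=500, n_features=4, random_state=42)
blackbox = RandomForestClassifier(random_state=42).fit(X, y)

# Instance to explain and a local neighbourhood sampled around it.
x0 = X[0]
rng = np.random.default_rng(42)
neighborhood = x0 + rng.normal(scale=0.3, size=(200, X.shape[1]))

# Query the black box; treat the maximum class probability as a certainty
# score, so uncertain predictions carry less weight in the local surrogate.
proba = blackbox.predict_proba(neighborhood)
labels = proba.argmax(axis=1)
certainty = proba.max(axis=1)

# Fit an interpretable surrogate (a shallow decision tree) with
# certainty-derived sample weights: prediction uncertainty is thus
# propagated into the explanation model (a simplification of the idea).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=42)
surrogate.fit(neighborhood, labels, sample_weight=certainty)

# Print the resulting local decision rules as a human-readable explanation.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))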

Metadata
Title
Introducing Uncertainty into Explainable AI Methods
Authors
Szymon Bobek
Grzegorz J. Nalepa
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-77980-1_34