
2020 | Original Paper | Book Chapter

Explanation Ontology: A Model of Explanations for User-Centered AI

Authors: Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, Deborah L. McGuinness

Published in: The Semantic Web – ISWC 2020

Publisher: Springer International Publishing


Abstract

Explainability has been a goal for Artificial Intelligence (AI) systems since their conception, with the need for explainability growing as more complex AI models are increasingly used in critical, high-stakes settings such as healthcare. Explanations have often been added to an AI system in a non-principled, post-hoc manner. With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end user needs to specific explanation types and the system’s AI capabilities. We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types. We indicate how the ontology can support user requirements for explanations in the domain of healthcare. We evaluate our ontology with a set of competency questions geared towards a system designer who might use our ontology to decide which explanation types to include, given a combination of users’ needs and a system’s capabilities, both in system design settings and in real-time operations. Through the use of this ontology, system designers will be able to make informed choices on which explanations AI systems can and should provide.


Metadata
Copyright year: 2020
DOI: https://doi.org/10.1007/978-3-030-62466-8_15