
2021 | Original Paper | Book Chapter

SeXAI: A Semantic Explainable Artificial Intelligence Framework

Authors: Ivan Donadello, Mauro Dragoni

Published in: AIxIA 2020 – Advances in Artificial Intelligence

Publisher: Springer International Publishing


Abstract

Interest in Explainable Artificial Intelligence (XAI) research has grown dramatically over the last few years. The main reason is the need for systems that, beyond being effective, can also describe how a certain output was obtained and present that description in a manner comprehensible to the target users. A promising research direction for making black boxes more transparent is the exploitation of semantic information. Such information can be exploited from different perspectives to provide a more comprehensive and interpretable representation of AI models. In this paper, we present the first version of SeXAI, a semantic-based explainable framework that aims to exploit semantic information for making black boxes more transparent. After a theoretical discussion, we show that this research direction is suitable and worthy of investigation by applying it to a real-world use case.
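The core idea sketched in the abstract — pairing a black-box prediction with concepts from a semantic knowledge base to produce a human-readable explanation — can be illustrated with a minimal toy example. The mini "ontology" and the `explain` function below are purely illustrative assumptions for this sketch, not the actual SeXAI framework or the HeLiS ontology:

```python
# Toy knowledge base: each classifier label is linked to semantic concepts.
# These entries are hypothetical and stand in for a real ontology.
ONTOLOGY = {
    "pizza_margherita": {
        "is_a": "Dish",
        "contains": ["Mozzarella", "TomatoSauce", "WheatFlour"],
    },
    "apple": {
        "is_a": "Fruit",
        "contains": ["Fructose", "Fiber"],
    },
}


def explain(label: str, confidence: float) -> str:
    """Turn a raw classifier label into a semantically grounded explanation."""
    entry = ONTOLOGY.get(label)
    if entry is None:
        # No semantic knowledge: fall back to the opaque prediction alone.
        return f"Predicted '{label}' ({confidence:.0%}); no semantic knowledge available."
    ingredients = ", ".join(entry["contains"])
    return (
        f"Predicted '{label}' ({confidence:.0%}): a {entry['is_a']} "
        f"that contains {ingredients}."
    )


print(explain("pizza_margherita", 0.92))
```

The point of the sketch is only the shape of the pipeline: the explanation is assembled from the knowledge base rather than from the model's internals, which is what makes the black-box output more transparent to a target user.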


Footnotes
2
In the remainder of the paper, we refer to concepts defined within the HeLiS ontology. We leave it to the reader to check the meaning of each concept in the reference paper.

3
The dataset, its comparison and the code are available at https://bit.ly/2Y7zSWZ.
 
Metadata
Title
SeXAI: A Semantic Explainable Artificial Intelligence Framework
Authors
Ivan Donadello
Mauro Dragoni
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-77091-4_4
