01-02-2023

Explainable AI: To Reveal the Logic of Black-Box Models

Authors: Chinu, Urvashi Bansal

Published in: New Generation Computing


Abstract

Artificial intelligence (AI) is continuously evolving; over the last ten years, however, AI models have become considerably more difficult to explain. Explanations help end users understand the outcomes generated by AI models. This work identifies major issues and gaps in the literature, chiefly unfair or biased decisions made by models, poor accuracy and reliability, the lack of evaluation metrics for assessing the effectiveness of explanations, and the security of data. The results highlight the needs, challenges, and opportunities in the field of Explainable Artificial Intelligence (XAI) and address the question of how AI models can be made explainable. The evaluation of explanations using metrics is the main contribution of this work. Moreover, this work analyzes different types of explanations, the leading companies providing XAI services, and the open-source XAI tools available in the market. Finally, based on the reviewed works, it outlines future directions for designing more transparent AI models.
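To make the abstract's mention of open-source XAI tooling concrete, the following is a minimal sketch, not drawn from the paper itself, of producing a local explanation for a black-box classifier with the open-source lime package; the random-forest model, the Iris dataset, and the parameter choices below are illustrative assumptions only.

```python
# Minimal sketch: explaining one prediction of a black-box model with LIME.
# Assumes the open-source `lime` and `scikit-learn` packages are installed;
# the random forest and the Iris dataset are illustrative choices, not the
# paper's experimental setup.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train an opaque ("black-box") model.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME fits a simple, interpretable surrogate model around one instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0],          # the instance whose prediction we want explained
    model.predict_proba,   # the black-box prediction function
    num_features=4,        # report the four most influential features
)

# Each (feature, weight) pair shows how strongly a feature pushed the
# prediction toward or away from the class being explained.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Local surrogate methods of this kind approximate the black-box decision boundary in the neighborhood of a single instance with a simple linear model, which is what makes the resulting per-feature weights interpretable to end users.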


Metadata

Title: Explainable AI: To Reveal the Logic of Black-Box Models
Authors: Chinu, Urvashi Bansal
Publication date: 01-02-2023
Publisher: Ohmsha
Published in: New Generation Computing
Print ISSN: 0288-3635
Electronic ISSN: 1882-7055
DOI: https://doi.org/10.1007/s00354-022-00201-2
