Published in: Artificial Intelligence Review 5/2022

18-11-2021

Explainable artificial intelligence: a comprehensive review

Authors: Dang Minh, H. Xiang Wang, Y. Fen Li, Tan N. Nguyen

Abstract

Thanks to the exponential growth in computing power and the availability of vast amounts of data, artificial intelligence (AI) has developed remarkably in recent years and is now ubiquitously adopted in our daily lives. Although AI-powered systems bring competitive advantages, their black-box nature makes them lack transparency and prevents them from explaining their decisions. This issue has motivated the introduction of explainable artificial intelligence (XAI), which promotes AI algorithms that can expose their internal processes and explain how they reach decisions. Although the volume of XAI research has increased significantly in recent years, a unified and comprehensive review of the latest progress is still lacking. This review aims to bridge that gap by identifying the critical perspectives in the rapidly growing body of XAI research. After offering readers a solid XAI background, we analyze and review various XAI methods, grouped into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability. We also pay particular attention to current methods dedicated to interpreting and analyzing deep learning models. In addition, we systematically discuss various XAI challenges, such as the trade-off between performance and explainability, evaluation methods, security, and policy. Finally, we present the standard approaches leveraged to deal with these challenges.
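Of the three groups above, post-modeling (post-hoc) explainability treats an already-trained model as a black box and probes it from the outside. As a minimal illustrative sketch (not a method taken from this review), permutation feature importance measures how much a model's error grows when one feature's values are shuffled across the dataset; `black_box` below is a hypothetical stand-in for an opaque model whose internals we pretend not to see:

```python
import random

# Hypothetical black-box model: we may only query its predictions.
# (Internally it weights the 3 features as [3.0, 0.0, 1.0].)
def black_box(x):
    return 3.0 * x[0] + 0.0 * x[1] + 1.0 * x[2]

def mse(model, X, y):
    """Mean squared error of the model over a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Post-hoc importance: increase in error after shuffling
    one feature's values across all rows."""
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - mse(model, X, y)

# Tiny synthetic dataset, labelled by the model itself
# (so the unpermuted baseline error is exactly zero).
rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]

scores = [permutation_importance(black_box, X, y, f) for f in range(3)]
print(scores)  # feature 0 should dominate; feature 1 should be ~0
```

The appeal of such post-hoc probes, as the abstract notes, is that they sidestep the performance/explainability trade-off: the deployed model is untouched, and only its input-output behavior is interrogated.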


Literature
go back to reference Abdollahi B, Nasraoui O (2018) Transparency in fair machine learning: the case of explainable recommender systems. In: Human and machine learning. Springer, Berlin, pp 21?35 Abdollahi B, Nasraoui O (2018) Transparency in fair machine learning: the case of explainable recommender systems. In: Human and machine learning. Springer, Berlin, pp 21?35
go back to reference Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138?52160CrossRef Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138?52160CrossRef
go back to reference Adebayo J, Gilmer J, Muelly M, Goodfellow I, Hardt M, Kim B (2018) Sanity checks for saliency maps. Adv Neural Inf Process Syst 31:9505?9515 Adebayo J, Gilmer J, Muelly M, Goodfellow I, Hardt M, Kim B (2018) Sanity checks for saliency maps. Adv Neural Inf Process Syst 31:9505?9515
go back to reference Adler P, Falk C, Friedler SA, Nix T, Rybeck G, Scheidegger C, Smith B, Venkatasubramanian S (2018) Auditing black-box models for indirect influence. Knowl Inf Syst 54(1):95?122CrossRef Adler P, Falk C, Friedler SA, Nix T, Rybeck G, Scheidegger C, Smith B, Venkatasubramanian S (2018) Auditing black-box models for indirect influence. Knowl Inf Syst 54(1):95?122CrossRef
go back to reference Adriana da Costa FC, Vellasco MMB, Tanscheit R (2013) Fuzzy rules extraction from support vector machines for multi-class classification. Neural Comput Appl 22(7):1571?1580CrossRef Adriana da Costa FC, Vellasco MMB, Tanscheit R (2013) Fuzzy rules extraction from support vector machines for multi-class classification. Neural Comput Appl 22(7):1571?1580CrossRef
go back to reference Ahn Y, Lin YR (2019) Fairsight: visual analytics for fairness in decision making. IEEE Trans Vis Comput Graph 26(1):1086?1095 Ahn Y, Lin YR (2019) Fairsight: visual analytics for fairness in decision making. IEEE Trans Vis Comput Graph 26(1):1086?1095
go back to reference Akula AR, Todorovic S, Chai JY, Zhu SC (2019) Natural language interaction with explainable AI models. In: CVPR workshops, pp 87?90 Akula AR, Todorovic S, Chai JY, Zhu SC (2019) Natural language interaction with explainable AI models. In: CVPR workshops, pp 87?90
go back to reference Al-Shedivat M, Dubey A, Xing E (2020) Contextual explanation networks. J Mach Learn Res 21(194):1?44MathSciNetMATH Al-Shedivat M, Dubey A, Xing E (2020) Contextual explanation networks. J Mach Learn Res 21(194):1?44MathSciNetMATH
go back to reference Angelov P, Soares E (2020) Towards explainable deep neural networks (xDNN). Neural Netw 130:185?194CrossRef Angelov P, Soares E (2020) Towards explainable deep neural networks (xDNN). Neural Netw 130:185?194CrossRef
go back to reference Anysz H, Zbiciak A, Ibadov N (2016) The influence of input data standardization method on prediction accuracy of artificial neural networks. Proc Eng 153:66?70CrossRef Anysz H, Zbiciak A, Ibadov N (2016) The influence of input data standardization method on prediction accuracy of artificial neural networks. Proc Eng 153:66?70CrossRef
go back to reference Arras L, Arjona-Medina J, Widrich M, Montavon G (2019) Explaining and interpreting lstms. In: Explainable AI: interpreting, explaining and visualizing deep learning, vol 11700, p 211 Arras L, Arjona-Medina J, Widrich M, Montavon G (2019) Explaining and interpreting lstms. In: Explainable AI: interpreting, explaining and visualizing deep learning, vol 11700, p 211
go back to reference Arrieta AB, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, García S, Gil-López S, Molina D, Benjamins R et al (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion 58:82?115CrossRef Arrieta AB, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, García S, Gil-López S, Molina D, Benjamins R et al (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion 58:82?115CrossRef
go back to reference Asadi S, Nilashi M, Husin ARC, Yadegaridehkordi E (2017) Customers perspectives on adoption of cloud computing in banking sector. Inf Technol Manag 18(4):305?330CrossRef Asadi S, Nilashi M, Husin ARC, Yadegaridehkordi E (2017) Customers perspectives on adoption of cloud computing in banking sector. Inf Technol Manag 18(4):305?330CrossRef
go back to reference Assaf R, Giurgiu I, Bagehorn F, Schumann A (2019) Mtex-cnn: Multivariate time series explanations for predictions with convolutional neural networks. In: 2019 IEEE international conference on data mining (ICDM). IEEE, pp 952?957 Assaf R, Giurgiu I, Bagehorn F, Schumann A (2019) Mtex-cnn: Multivariate time series explanations for predictions with convolutional neural networks. In: 2019 IEEE international conference on data mining (ICDM). IEEE, pp 952?957
go back to reference Bang JS, Lee MH, Fazli S, Guan C, Lee SW (2021) Spatio-spectral feature representation for motor imagery classification using convolutional neural networks. IEEE Trans Neural Netw Learn Syst Bang JS, Lee MH, Fazli S, Guan C, Lee SW (2021) Spatio-spectral feature representation for motor imagery classification using convolutional neural networks. IEEE Trans Neural Netw Learn Syst
go back to reference Baniecki H, Biecek P (2019) modelStudio: Interactive studio with explanations for ML predictive models. J Open Source Softw 4(43):1798CrossRef Baniecki H, Biecek P (2019) modelStudio: Interactive studio with explanations for ML predictive models. J Open Source Softw 4(43):1798CrossRef
go back to reference Baron B, Musolesi M (2020) Interpretable machine learning for privacy-preserving pervasive systems. IEEE Pervasive Comput Baron B, Musolesi M (2020) Interpretable machine learning for privacy-preserving pervasive systems. IEEE Pervasive Comput
go back to reference Bau D, Zhou B, Khosla A, Oliva A, Torralba A (2017) Network dissection: Quantifying interpretability of deep visual representations. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 6541?6549 Bau D, Zhou B, Khosla A, Oliva A, Torralba A (2017) Network dissection: Quantifying interpretability of deep visual representations. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 6541?6549
go back to reference Bender EM, Friedman B (2018) Data statements for natural language processing: toward mitigating system bias and enabling better science. Trans Assoc Comput Linguist 6:587?604CrossRef Bender EM, Friedman B (2018) Data statements for natural language processing: toward mitigating system bias and enabling better science. Trans Assoc Comput Linguist 6:587?604CrossRef
go back to reference Bi X, Zhang C, He Y, Zhao X, Sun Y, Ma Y (2021) Explainable time?frequency convolutional neural network for microseismic waveform classification. Inf Sci 546:883?896MathSciNetMATHCrossRef Bi X, Zhang C, He Y, Zhao X, Sun Y, Ma Y (2021) Explainable time?frequency convolutional neural network for microseismic waveform classification. Inf Sci 546:883?896MathSciNetMATHCrossRef
go back to reference Blanco-Justicia A, Domingo-Ferrer J, Martínez S, Sánchez D (2020) Machine learning explainability via microaggregation and shallow decision trees. Knowl-Based Syst 194:105532CrossRef Blanco-Justicia A, Domingo-Ferrer J, Martínez S, Sánchez D (2020) Machine learning explainability via microaggregation and shallow decision trees. Knowl-Based Syst 194:105532CrossRef
go back to reference Bologna G (2019) A simple convolutional neural network with rule extraction. Appl Sci 9(12):2411CrossRef Bologna G (2019) A simple convolutional neural network with rule extraction. Appl Sci 9(12):2411CrossRef
go back to reference Butterworth M (2018) The ICO and artificial intelligence: the role of fairness in the GDPR framework. Comput Law Secur Rev 34(2):257?268CrossRef Butterworth M (2018) The ICO and artificial intelligence: the role of fairness in the GDPR framework. Comput Law Secur Rev 34(2):257?268CrossRef
go back to reference Campbell T, Broderick T (2019) Automated scalable Bayesian inference via Hilbert coresets. J Mach Learn Res 20(1):551?588MathSciNetMATH Campbell T, Broderick T (2019) Automated scalable Bayesian inference via Hilbert coresets. J Mach Learn Res 20(1):551?588MathSciNetMATH
go back to reference Cao HE, Sarlin R, Jung A (2020) Learning explainable decision rules via maximum satisfiability. IEEE Access 8:218180?218185CrossRef Cao HE, Sarlin R, Jung A (2020) Learning explainable decision rules via maximum satisfiability. IEEE Access 8:218180?218185CrossRef
go back to reference Carey P (2018) Data protection: a practical guide to UK and EU law. Oxford University Press, Inc, Oxford Carey P (2018) Data protection: a practical guide to UK and EU law. Oxford University Press, Inc, Oxford
go back to reference Carter S, Armstrong Z, Schubert L, Johnson I, Olah C (2019) Activation atlas. Distill 4(3):e15CrossRef Carter S, Armstrong Z, Schubert L, Johnson I, Olah C (2019) Activation atlas. Distill 4(3):e15CrossRef
go back to reference Carvalho DV, Pereira EM, Cardoso JS (2019a) Machine learning interpretability: a survey on methods and metrics. Electronics 8(8):832CrossRef Carvalho DV, Pereira EM, Cardoso JS (2019a) Machine learning interpretability: a survey on methods and metrics. Electronics 8(8):832CrossRef
go back to reference Carvalho DV, Pereira EM, Cardoso JS (2019b) Machine learning interpretability: a survey on methods and metrics. Electronics 8(8):832CrossRef Carvalho DV, Pereira EM, Cardoso JS (2019b) Machine learning interpretability: a survey on methods and metrics. Electronics 8(8):832CrossRef
go back to reference Ceni A, Ashwin P, Livi L (2020) Interpreting recurrent neural networks behaviour via excitable network attractors. Cogn Comput 12(2):330?356CrossRef Ceni A, Ashwin P, Livi L (2020) Interpreting recurrent neural networks behaviour via excitable network attractors. Cogn Comput 12(2):330?356CrossRef
go back to reference Chakraborty S, Tomsett R, Raghavendra R, Harborne D, Alzantot M, Cerutti F, Srivastava M, Preece A, Julier S, Rao RM et al (2017) Interpretability of deep learning models: a survey of results. In: 2017 IEEE SmartWorld, ubiquitous intelligence & computing, advanced & trusted computed, scalable computing & communications, cloud & big data computing, internet of people and smart city innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). IEEE, pp 1?6 Chakraborty S, Tomsett R, Raghavendra R, Harborne D, Alzantot M, Cerutti F, Srivastava M, Preece A, Julier S, Rao RM et al (2017) Interpretability of deep learning models: a survey of results. In: 2017 IEEE SmartWorld, ubiquitous intelligence & computing, advanced & trusted computed, scalable computing & communications, cloud & big data computing, internet of people and smart city innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). IEEE, pp 1?6
go back to reference Chan TH, Jia K, Gao S, Lu J, Zeng Z, Ma Y (2015) PCANet: a simple deep learning baseline for image classification? IEEE Trans Image Process 24(12):5017?5032MathSciNetMATHCrossRef Chan TH, Jia K, Gao S, Lu J, Zeng Z, Ma Y (2015) PCANet: a simple deep learning baseline for image classification? IEEE Trans Image Process 24(12):5017?5032MathSciNetMATHCrossRef
go back to reference Chen J, Song L, Wainwright MJ, Jordan MI (2018) L-shapley and c-shapley: efficient model interpretation for structured data. In: International conference on learning representations Chen J, Song L, Wainwright MJ, Jordan MI (2018) L-shapley and c-shapley: efficient model interpretation for structured data. In: International conference on learning representations
go back to reference Chen J, Vaughan J, Nair V, Sudjianto A (2020a) Adaptive explainable neural networks (AxNNs). Available at SSRN 3569318 Chen J, Vaughan J, Nair V, Sudjianto A (2020a) Adaptive explainable neural networks (AxNNs). Available at SSRN 3569318
go back to reference Chen Y, Yu C, Liu X, Xi T, Xu G, Sun Y, Zhu F, Shen B (2020b) PCLiON: an ontology for data standardization and sharing of prostate cancer associated lifestyles. Int J Med Inform 145:104332CrossRef Chen Y, Yu C, Liu X, Xi T, Xu G, Sun Y, Zhu F, Shen B (2020b) PCLiON: an ontology for data standardization and sharing of prostate cancer associated lifestyles. Int J Med Inform 145:104332CrossRef
go back to reference Chen H, Lundberg S, Lee SI (2021) Explaining models by propagating Shapley values of local components. In: Explainable AI in Healthcare and Medicine. Springer, Berlin, pp 261?270 Chen H, Lundberg S, Lee SI (2021) Explaining models by propagating Shapley values of local components. In: Explainable AI in Healthcare and Medicine. Springer, Berlin, pp 261?270
go back to reference Choi E, Bahadori MT, Kulas JA, Schuetz A, Stewart WF, Sun J (2016) Retain: an interpretable predictive model for healthcare using reverse time attention mechanism. In: Advances in Neural Information Processing Systems, pp 3512?3520 Choi E, Bahadori MT, Kulas JA, Schuetz A, Stewart WF, Sun J (2016) Retain: an interpretable predictive model for healthcare using reverse time attention mechanism. In: Advances in Neural Information Processing Systems, pp 3512?3520
go back to reference Choi KS, Choi SH, Jeong B (2019) Prediction of IDH genotype in gliomas with dynamic susceptibility contrast perfusion MR imaging using an explainable recurrent neural network. Neuro Oncol 21(9):1197?1209CrossRef Choi KS, Choi SH, Jeong B (2019) Prediction of IDH genotype in gliomas with dynamic susceptibility contrast perfusion MR imaging using an explainable recurrent neural network. Neuro Oncol 21(9):1197?1209CrossRef
go back to reference Choi H, Som A, Turaga P (2020) AMC-loss: angular margin contrastive loss for improved explainability in image classification. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 838?839 Choi H, Som A, Turaga P (2020) AMC-loss: angular margin contrastive loss for improved explainability in image classification. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 838?839
go back to reference Choo J, Liu S (2018) Visual analytics for explainable deep learning. IEEE Comput Graph Appl 38(4):84?92CrossRef Choo J, Liu S (2018) Visual analytics for explainable deep learning. IEEE Comput Graph Appl 38(4):84?92CrossRef
go back to reference Comizio VG, Petrasic KL, Lee HY (2011) Regulators take steps to eliminate differences in thrift, bank and holding company reporting requirements. Banking LJ 128:426 Comizio VG, Petrasic KL, Lee HY (2011) Regulators take steps to eliminate differences in thrift, bank and holding company reporting requirements. Banking LJ 128:426
go back to reference Cortez P, Embrechts MJ (2013) Using sensitivity analysis and visualization techniques to open black box data mining models. Inf Sci 225:1?17CrossRef Cortez P, Embrechts MJ (2013) Using sensitivity analysis and visualization techniques to open black box data mining models. Inf Sci 225:1?17CrossRef
go back to reference Craven MW, Shavlik JW (2014) Learning symbolic rules using artificial neural networks. In: Proceedings of the tenth international conference on machine learning, pp 73?80 Craven MW, Shavlik JW (2014) Learning symbolic rules using artificial neural networks. In: Proceedings of the tenth international conference on machine learning, pp 73?80
go back to reference Daglarli E (2020) Explainable artificial intelligence (XAI) approaches and deep meta-learning models. In: Advances and applications in deep learning, p 79 Daglarli E (2020) Explainable artificial intelligence (XAI) approaches and deep meta-learning models. In: Advances and applications in deep learning, p 79
go back to reference Dai J, Chen C, Li Y (2019) A backdoor attack against lstm-based text classification systems. IEEE Access 7:138872?138878CrossRef Dai J, Chen C, Li Y (2019) A backdoor attack against lstm-based text classification systems. IEEE Access 7:138872?138878CrossRef
go back to reference Dang LM, Hassan SI, Im S, Mehmood I, Moon H (2018) Utilizing text recognition for the defects extraction in sewers CCTV inspection videos. Comput Ind 99:96?109CrossRef Dang LM, Hassan SI, Im S, Mehmood I, Moon H (2018) Utilizing text recognition for the defects extraction in sewers CCTV inspection videos. Comput Ind 99:96?109CrossRef
go back to reference Dang LM, Piran M, Han D, Min K, Moon H et al (2019) A survey on internet of things and cloud computing for healthcare. Electronics 8(7):768CrossRef Dang LM, Piran M, Han D, Min K, Moon H et al (2019) A survey on internet of things and cloud computing for healthcare. Electronics 8(7):768CrossRef
go back to reference De T, Giri P, Mevawala A, Nemani R, Deo A (2020) Explainable AI: a hybrid approach to generate human-interpretable explanation for deep learning prediction. Procedia Comput Sci 168:40?48CrossRef De T, Giri P, Mevawala A, Nemani R, Deo A (2020) Explainable AI: a hybrid approach to generate human-interpretable explanation for deep learning prediction. Procedia Comput Sci 168:40?48CrossRef
go back to reference Deeks A (2019) The judicial demand for explainable artificial intelligence. Columbia Law Rev 119(7):1829?1850 Deeks A (2019) The judicial demand for explainable artificial intelligence. Columbia Law Rev 119(7):1829?1850
go back to reference Deleforge A, Forbes F, Horaud R (2015) High-dimensional regression with gaussian mixtures and partially-latent response variables. Stat Comput 25(5):893?911MathSciNetMATHCrossRef Deleforge A, Forbes F, Horaud R (2015) High-dimensional regression with gaussian mixtures and partially-latent response variables. Stat Comput 25(5):893?911MathSciNetMATHCrossRef
go back to reference Deng H (2019) Interpreting tree ensembles with intrees. Int J Data Sci Anal 7(4):277?287CrossRef Deng H (2019) Interpreting tree ensembles with intrees. Int J Data Sci Anal 7(4):277?287CrossRef
go back to reference Dibia V, Demiralp Ç (2019) Data2vis: automatic generation of data visualizations using sequence-to-sequence recurrent neural networks. IEEE Comput Graph Appl 39(5):33?46CrossRef Dibia V, Demiralp Ç (2019) Data2vis: automatic generation of data visualizations using sequence-to-sequence recurrent neural networks. IEEE Comput Graph Appl 39(5):33?46CrossRef
go back to reference Ding L (2018) Human knowledge in constructing AI systems?neural logic networks approach towards an explainable AI. Procedia Comput Sci 126:1561?1570CrossRef Ding L (2018) Human knowledge in constructing AI systems?neural logic networks approach towards an explainable AI. Procedia Comput Sci 126:1561?1570CrossRef
go back to reference Dingen D, van?t Veer M, Houthuizen P, Mestrom EH, Korsten EH, Bouwman AR, Van Wijk J (2018) Regressionexplorer: interactive exploration of logistic regression models with subgroup analysis. IEEE Trans Vis Comput Graph 25(1):246?255 Dingen D, van?t Veer M, Houthuizen P, Mestrom EH, Korsten EH, Bouwman AR, Van Wijk J (2018) Regressionexplorer: interactive exploration of logistic regression models with subgroup analysis. IEEE Trans Vis Comput Graph 25(1):246?255
go back to reference Dogra DP, Ahmed A, Bhaskar H (2016) Smart video summarization using mealy machine-based trajectory modelling for surveillance applications. Multimed Tools Appl 75(11):6373?6401CrossRef Dogra DP, Ahmed A, Bhaskar H (2016) Smart video summarization using mealy machine-based trajectory modelling for surveillance applications. Multimed Tools Appl 75(11):6373?6401CrossRef
go back to reference Doran D, Schulz S, Besold TR (2017) What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:171000794 Doran D, Schulz S, Besold TR (2017) What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:171000794
go back to reference DuMouchel W (2002) Data squashing: constructing summary data sets. In: Handbook of massive data sets. Springer, Cham, pp 579?591 DuMouchel W (2002) Data squashing: constructing summary data sets. In: Handbook of massive data sets. Springer, Cham, pp 579?591
go back to reference Dunn C, Moustafa N, Turnbull B (2020) Robustness evaluations of sustainable machine learning models against data poisoning attacks in the internet of things. Sustainability 12(16):6434CrossRef Dunn C, Moustafa N, Turnbull B (2020) Robustness evaluations of sustainable machine learning models against data poisoning attacks in the internet of things. Sustainability 12(16):6434CrossRef
go back to reference Dziugaite GK, Ben-David S, Roy DM (2020) Enforcing interpretability and its statistical impacts: trade-offs between accuracy and interpretability. arXiv preprint arXiv:201013764 Dziugaite GK, Ben-David S, Roy DM (2020) Enforcing interpretability and its statistical impacts: trade-offs between accuracy and interpretability. arXiv preprint arXiv:201013764
go back to reference Eiras-Franco C, Guijarro-Berdiñas B, Alonso-Betanzos A, Bahamonde A (2019) A scalable decision-tree-based method to explain interactions in dyadic data. Decis Support Syst 127:113141MATHCrossRef Eiras-Franco C, Guijarro-Berdiñas B, Alonso-Betanzos A, Bahamonde A (2019) A scalable decision-tree-based method to explain interactions in dyadic data. Decis Support Syst 127:113141MATHCrossRef
go back to reference Elshawi R, Al-Mallah MH, Sakr S (2019) On the interpretability of machine learning-based model for predicting hypertension. BMC Med Inform Decis Mak 19(1):1?32CrossRef Elshawi R, Al-Mallah MH, Sakr S (2019) On the interpretability of machine learning-based model for predicting hypertension. BMC Med Inform Decis Mak 19(1):1?32CrossRef
go back to reference Erfani SM, Rajasegarar S, Karunasekera S, Leckie C (2016) High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recogn 58:121?134CrossRef Erfani SM, Rajasegarar S, Karunasekera S, Leckie C (2016) High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recogn 58:121?134CrossRef
go back to reference Escalante HJ, Escalera S, Guyon I, Baró X, Güçlütürk Y, Güçlü U, van Gerven M, van Lier R (2018) Explainable and interpretable models in computer vision and machine learning. Springer, ChamCrossRef Escalante HJ, Escalera S, Guyon I, Baró X, Güçlütürk Y, Güçlü U, van Gerven M, van Lier R (2018) Explainable and interpretable models in computer vision and machine learning. Springer, ChamCrossRef
go back to reference Escobar CA, Morales-Menendez R (2019) Process-monitoring-for-quality?a model selection criterion for support vector machine. Procedia Manuf 34:1010?1017CrossRef Escobar CA, Morales-Menendez R (2019) Process-monitoring-for-quality?a model selection criterion for support vector machine. Procedia Manuf 34:1010?1017CrossRef
go back to reference Fang X, Xu Y, Li X, Lai Z, Wong WK, Fang B (2017) Regularized label relaxation linear regression. IEEE Trans Neural Netwo Learn Syst 29(4):1006?1018CrossRef Fang X, Xu Y, Li X, Lai Z, Wong WK, Fang B (2017) Regularized label relaxation linear regression. IEEE Trans Neural Netwo Learn Syst 29(4):1006?1018CrossRef
go back to reference Felzmann H, Fosch-Villaronga E, Lutz C, Tamo-Larrieux A (2019) Robots and transparency: the multiple dimensions of transparency in the context of robot technologies. IEEE Robotics Autom Mag 26(2):71?78CrossRef Felzmann H, Fosch-Villaronga E, Lutz C, Tamo-Larrieux A (2019) Robots and transparency: the multiple dimensions of transparency in the context of robot technologies. IEEE Robotics Autom Mag 26(2):71?78CrossRef
go back to reference Fernandez A, Herrera F, Cordon O, del Jesus MJ, Marcelloni F (2019) Evolutionary fuzzy systems for explainable artificial intelligence: why, when, what for, and where to? IEEE Comput Intell Mag 14(1):69?81CrossRef Fernandez A, Herrera F, Cordon O, del Jesus MJ, Marcelloni F (2019) Evolutionary fuzzy systems for explainable artificial intelligence: why, when, what for, and where to? IEEE Comput Intell Mag 14(1):69?81CrossRef
go back to reference Forte JC, Mungroop HE, de Geus F, van der Grinten ML, Bouma HR, Pettilä V, Scheeren TW, Nijsten MW, Mariani MA, van der Horst IC et al (2021) Ensemble machine learning prediction and variable importance analysis of 5-year mortality after cardiac valve and CABG operations. Sci Rep 11(1):1?11 Forte JC, Mungroop HE, de Geus F, van der Grinten ML, Bouma HR, Pettilä V, Scheeren TW, Nijsten MW, Mariani MA, van der Horst IC et al (2021) Ensemble machine learning prediction and variable importance analysis of 5-year mortality after cardiac valve and CABG operations. Sci Rep 11(1):1?11
go back to reference Främling K (2020) Decision theory meets explainable AI. In: International workshop on explainable, transparent autonomous agents and multi-agent systems. Springer, Cham, pp 57?74 Främling K (2020) Decision theory meets explainable AI. In: International workshop on explainable, transparent autonomous agents and multi-agent systems. Springer, Cham, pp 57?74
go back to reference Gallego AJ, Calvo-Zaragoza J, Valero-Mas JJ, Rico-Juan JR (2018) Clustering-based k-nearest neighbor classification for large-scale data with neural codes representation. Pattern Recogn 74:531?543CrossRef Gallego AJ, Calvo-Zaragoza J, Valero-Mas JJ, Rico-Juan JR (2018) Clustering-based k-nearest neighbor classification for large-scale data with neural codes representation. Pattern Recogn 74:531?543CrossRef
go back to reference Gaonkar B, Shinohara RT, Davatzikos C, Initiative ADN et al (2015) Interpreting support vector machine models for multivariate group wise analysis in neuroimaging. Med Image Anal 24(1):190?204CrossRef Gaonkar B, Shinohara RT, Davatzikos C, Initiative ADN et al (2015) Interpreting support vector machine models for multivariate group wise analysis in neuroimaging. Med Image Anal 24(1):190?204CrossRef
go back to reference García-Magariño I, Muttukrishnan R, Lloret J (2019) Human-centric AI for trustworthy IoT systems with explainable multilayer perceptrons. IEEE Access 7:125562?125574CrossRef García-Magariño I, Muttukrishnan R, Lloret J (2019) Human-centric AI for trustworthy IoT systems with explainable multilayer perceptrons. IEEE Access 7:125562?125574CrossRef
go back to reference Ghorbani A, Abid A, Zou J (2019) Interpretation of neural networks is fragile. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 3681?3688 Ghorbani A, Abid A, Zou J (2019) Interpretation of neural networks is fragile. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 3681?3688
Gite S, Khatavkar H, Kotecha K, Srivastava S, Maheshwari P, Pandey N (2021) Explainable stock prices prediction from financial news articles using sentiment analysis. PeerJ Comput Sci 7:e340
Gronauer S, Diepold K (2021) Multi-agent deep reinforcement learning: a survey. Artif Intell Rev 1–49
Gu D, Su K, Zhao H (2020a) A case-based ensemble learning system for explainable breast cancer recurrence prediction. Artif Intell Med 107:101858
Gu R, Wang G, Song T, Huang R, Aertsen M, Deprest J, Ourselin S, Vercauteren T, Zhang S (2020b) CA-Net: comprehensive attention convolutional neural networks for explainable medical image segmentation. IEEE Trans Med Imaging
Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2019) A survey of methods for explaining black box models. ACM Comput Surv (CSUR) 51(5):93
Gulati P, Hu Q, Atashzar SF (2021) Toward deep generalization of peripheral EMG-based human-robot interfacing: a hybrid explainable solution for neurorobotic systems. IEEE Robot Autom Lett
Guo S, Yu J, Liu X, Wang C, Jiang Q (2019) A predicting model for properties of steel using the industrial big data based on machine learning. Comput Mater Sci 160:95–104
Guo W (2020) Explainable artificial intelligence for 6G: improving trust between human and machine. IEEE Commun Mag 58(6):39–45
Gupta B, Rawat A, Jain A, Arora A, Dhami N (2017) Analysis of various decision tree algorithms for classification in data mining. Int J Comput Appl 163(8):15–19
H2oai (2017) Comparative performance analysis of neural networks architectures on h2o platform for various activation functions. In: 2017 IEEE international young scientists forum on applied physics and engineering (YSF). IEEE, pp 70–73
Haasdonk B (2005) Feature space interpretation of SVMs with indefinite kernels. IEEE Trans Pattern Anal Mach Intell 27(4):482–492
Hagras H (2018) Toward human-understandable, explainable AI. Computer 51(9):28–36
Hara S, Hayashi K (2018) Making tree ensembles interpretable: a Bayesian model selection approach. In: International conference on artificial intelligence and statistics. PMLR, pp 77–85
Hatwell J, Gaber MM, Azad RMA (2020) CHIRPS: explaining random forest classification. Artif Intell Rev 53:5747–5788
Hatzilygeroudis I, Prentzas J (2015) Symbolic-neural rule based reasoning and explanation. Expert Syst Appl 42(9):4595–4609
Hendricks LA, Akata Z, Rohrbach M, Donahue J, Schiele B, Darrell T (2016) Generating visual explanations. In: European conference on computer vision. Springer, Cham, pp 3–19
Henelius A, Puolamäki K, Boström H, Asker L, Papapetrou P (2014) A peek into the black box: exploring classifiers by randomization. Data Min Knowl Disc 28(5):1503–1529
Hind M, Wei D, Campbell M, Codella NC, Dhurandhar A, Mojsilović A, Natesan Ramamurthy K, Varshney KR (2019) TED: teaching AI to explain its decisions. In: Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society, pp 123–129
Hoffman RR, Mueller ST, Klein G, Litman J (2018) Metrics for explainable AI: challenges and prospects. arXiv preprint arXiv:1812.04608
Holzinger A (2016) Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Inform 3(2):119–131
Holzinger A, Langs G, Denk H, Zatloukal K, Müller H (2019) Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov 9(4):e1312
Holzinger A, Malle B, Saranti A, Pfeifer B (2021a) Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI. Inf Fusion 71:28–37
Holzinger A, Weippl E, Tjoa AM, Kieseberg P (2021b) Digital transformation for sustainable development goals (SDGs): a security, safety and privacy perspective on AI. In: International cross-domain conference for machine learning and knowledge extraction. Springer, Cham, pp 103–107
Hu K, Orghian D, Hidalgo C (2018a) DIVE: a mixed-initiative system supporting integrated data exploration workflows. In: Proceedings of the workshop on human-in-the-loop data analytics, pp 1–7
Hu R, Andreas J, Darrell T, Saenko K (2018b) Explainable neural computation via stack neural module networks. In: Proceedings of the European conference on computer vision (ECCV), pp 53–69
Huang Q, Katsman I, He H, Gu Z, Belongie S, Lim SN (2019) Enhancing adversarial example transferability with an intermediate level attack. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 4733–4742
Huisman M, van Rijn JN, Plaat A (2021) A survey of deep meta-learning. Artif Intell Rev 1–59
IBM (2019) AI fairness 360: an extensible toolkit for detecting and mitigating algorithmic bias. IBM J Res Dev 63(4/5):4:1
Islam MA, Anderson DT, Pinar AJ, Havens TC, Scott G, Keller JM (2019) Enabling explainable fusion in deep learning with fuzzy integral neural networks. IEEE Trans Fuzzy Syst 28(7):1291–1300
Islam NU, Lee S (2019) Interpretation of deep CNN based on learning feature reconstruction with feedback weights. IEEE Access 7:25195–25208
Ivanovs M, Kadikis R, Ozols K (2021) Perturbation-based methods for explaining deep neural networks: a survey. Pattern Recognit Lett
Jagadish H, Gehrke J, Labrinidis A, Papakonstantinou Y, Patel JM, Ramakrishnan R, Shahabi C (2014) Big data and its technical challenges. Commun ACM 57(7):86–94
Janitza S, Celik E, Boulesteix AL (2018) A computationally fast variable importance test for random forests for high-dimensional data. Adv Data Anal Classif 12(4):885–915
Jung YJ, Han SH, Choi HJ (2021) Explaining CNN and RNN using selective layer-wise relevance propagation. IEEE Access 9:18670–18681
Junior JRB (2020) Graph embedded rules for explainable predictions in data streams. Neural Netw 129:174–192
Juuti M, Szyller S, Marchal S, Asokan N (2019) PRADA: protecting against DNN model stealing attacks. In: 2019 IEEE European symposium on security and privacy (EuroS&P). IEEE, pp 512–527
Kapelner A, Soterwood J, Nessaiver S, Adlof S (2018) Predicting contextual informativeness for vocabulary learning. IEEE Trans Learn Technol 11(1):13–26
Karlsson I, Rebane J, Papapetrou P, Gionis A (2020) Locally and globally explainable time series tweaking. Knowl Inf Syst 62(5):1671–1700
Keane MT, Kenny EM (2019) How case-based reasoning explains neural networks: a theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: International conference on case-based reasoning. Springer, Cham, pp 155–171
Keneni BM, Kaur D, Al Bataineh A, Devabhaktuni VK, Javaid AY, Zaientz JD, Marinier RP (2019) Evolving rule-based explainable artificial intelligence for unmanned aerial vehicles. IEEE Access 7:17001–17016
Kenny EM, Ford C, Quinn M, Keane MT (2021) Explaining black-box classifiers using post-hoc explanations-by-example: the effect of explanations and error-rates in XAI user studies. Artif Intell 294:103459
Kim J, Canny J (2018) Explainable deep driving by visualizing causal attention. In: Explainable and interpretable models in computer vision and machine learning. Springer, Cham, pp 173–193
Kindermans PJ, Hooker S, Adebayo J, Alber M, Schütt KT, Dähne S, Erhan D, Kim B (2019) The (un)reliability of saliency methods. In: Explainable AI: interpreting, explaining and visualizing deep learning. Springer, Cham, pp 267–280
Kiritz N, Sarfati P (2018) Supervisory guidance on model risk management (SR 11-7) versus enterprise-wide model risk management for deposit-taking institutions (E-23): a detailed comparative analysis. Available at SSRN 3332484
Koh PW, Liang P (2017) Understanding black-box predictions via influence functions. In: International conference on machine learning. PMLR, pp 1885–1894
Kolyshkina I, Simoff S (2021) Interpretability of machine learning solutions in public healthcare: the CRISP-ML approach. Front Big Data 4:18
Konig R, Johansson U, Niklasson L (2008) G-REX: a versatile framework for evolutionary data mining. In: 2008 IEEE international conference on data mining workshops. IEEE, pp 971–974
Konstantinov AV, Utkin LV (2021) Interpretable machine learning with an ensemble of gradient boosting machines. Knowl Based Syst 222:106993
Krishnamurthy P, Sarmadi A, Khorrami F (2021) Explainable classification by learning human-readable sentences in feature subsets. Inf Sci 564:202–219
Kumari B, Swarnkar T (2020) Importance of data standardization methods on stock indices prediction accuracy. In: Advanced computing and intelligent engineering. Springer, Cham, pp 309–318
Kuo CCJ, Zhang M, Li S, Duan J, Chen Y (2019) Interpretable convolutional neural networks via feedforward design. J Vis Commun Image Represent 60:346–359
Langer M, Oster D, Speith T, Hermanns H, Kästner L, Schmidt E, Sesing A, Baum K (2021) What do we want from explainable artificial intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artif Intell 296:103473
Lapchak PA, Zhang JH (2018) Data standardization and quality management. Transl Stroke Res 9(1):4–8
Lapuschkin S, Binder A, Montavon G, Müller KR, Samek W (2016) The LRP toolbox for artificial neural networks. J Mach Learn Res 17(1):3938–3942
Latouche P, Robin S, Ouadah S (2018) Goodness of fit of logistic regression models for random graphs. J Comput Graph Stat 27(1):98–109
Lauritsen SM, Kristensen M, Olsen MV, Larsen MS, Lauritsen KM, Jørgensen MJ, Lange J, Thiesson B (2020) Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nat Commun 11(1):1–11
Lawless WF, Mittu R, Sofge D, Hiatt L (2019) Artificial intelligence, autonomy, and human-machine teams: interdependence, context, and explainable AI. AI Mag 40(3)
Lee D, Mulrow J, Haboucha CJ, Derrible S, Shiftan Y (2019) Attitudes on autonomous vehicle adoption using interpretable gradient boosting machine. Transp Res Rec, p 0361198119857953
Li K, Hu C, Liu G, Xue W (2015) Building's electricity consumption prediction using optimized artificial neural networks and principal component analysis. Energy Build 108:106–113
Liang S, Sabri AQM, Alnajjar F, Loo CK (2021) Autism spectrum self-stimulatory behaviours classification using explainable temporal coherency deep features and SVM classifier. IEEE Access
Liberati C, Camillo F, Saporta G (2017) Advances in credit scoring: combining performance and interpretation in kernel discriminant analysis. Adv Data Anal Classif 11(1):121–138
Lin YC, Lee YC, Tsai WC, Beh WK, Wu AYA (2020) Explainable deep neural network for identifying cardiac abnormalities using class activation map. In: 2020 Computing in cardiology. IEEE, pp 1–4
Liu YJ, Ma C, Zhao G, Fu X, Wang H, Dai G, Xie L (2016) An interactive SpiralTape video summarization. IEEE Trans Multimed 18(7):1269–1282
Liu Z, Tang B, Wang X, Chen Q (2017) De-identification of clinical notes via recurrent neural network and conditional random field. J Biomed Inform 75:S34–S42
Liu P, Zhang L, Gulla JA (2020) Dynamic attention-based explainable recommendation with textual and visual fusion. Inf Process Manag 57(6):102099
Long M, Cao Y, Cao Z, Wang J, Jordan MI (2018) Transferable representation learning with deep adaptation networks. IEEE Trans Pattern Anal Mach Intell 41(12):3071–3085
Loor M, De Tré G (2020) Contextualizing support vector machine predictions. Int J Comput Intell Syst 13(1):1483–1497
Luo X, Chang X, Ban X (2016) Regression and classification using extreme learning machine based on L1-norm and L2-norm. Neurocomputing 174:179–186
Ma Y, Chen W, Ma X, Xu J, Huang X, Maciejewski R, Tung AK (2017) EasySVM: a visual analysis approach for open-box support vector machines. Comput Vis Media 3(2):161–175
Manica M, Oskooei A, Born J, Subramanian V, Sáez-Rodríguez J, Rodriguez Martinez M (2019) Toward explainable anticancer compound sensitivity prediction via multimodal attention-based convolutional encoders. Mol Pharm 16(12):4797–4806
Martini ML, Neifert SN, Gal JS, Oermann EK, Gilligan JT, Caridi JM (2021) Drivers of prolonged hospitalization following spine surgery: a game-theory-based approach to explaining machine learning models. JBJS 103(1):64–73
Maweu BM, Dakshit S, Shamsuddin R, Prabhakaran B (2021) CEFEs: a CNN explainable framework for ECG signals. Artif Intell Med 102059
Meske C, Bunde E, Schneider J, Gersch M (2020) Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Inf Syst Manag 1–11
Minh DL, Sadeghi-Niaraki A, Huy HD, Min K, Moon H (2018) Deep learning approach for short-term stock trends prediction based on two-stream gated recurrent unit network. IEEE Access 6:55392–55404
Mohit, Kumari AC, Sharma M (2019) A novel approach to text clustering using shift k-medoid. Int J Soc Comput Cyber Phys Syst 2(2):106–118
Molnar C, Casalicchio G, Bischl B (2019) Quantifying model complexity via functional decomposition for better post-hoc interpretability. In: Joint European conference on machine learning and knowledge discovery in databases. Springer, Cham, pp 193–204
Montavon G, Lapuschkin S, Binder A, Samek W, Müller KR (2017) Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recogn 65:211–222
Moradi M, Samwald M (2021) Post-hoc explanation of black-box classifiers using confident itemsets. Expert Syst Appl 165:113941
Muller H, Mayrhofer MT, Van Veen EB, Holzinger A (2021) The ten commandments of ethical medical AI. Computer 54(7):119–123
Musto C, de Gemmis M, Lops P, Semeraro G (2020) Generating post hoc review-based natural language justifications for recommender systems. User Model User Adapt Interact 1–45
Neto MP, Paulovich FV (2020) Explainable matrix: visualization for global and local interpretability of random forest classification ensembles. IEEE Trans Vis Comput Graph
Ng SF, Chew YM, Chng PE, Ng KS (2018) An insight of linear regression analysis. Sci Res J 15(2):1–16
Nguyen TN, Lee S, Nguyen-Xuan H, Lee J (2019) A novel analysis-prediction approach for geometrically nonlinear problems using group method of data handling. Comput Methods Appl Mech Eng 354:506–526
Nguyen DT, Kasmarik KE, Abbass HA (2020a) Towards interpretable neural networks: an exact transformation to multi-class multivariate decision trees. arXiv preprint arXiv:2003.04675
Nguyen TN, Nguyen-Xuan H, Lee J (2020b) A novel data-driven nonlinear solver for solid mechanics using time series forecasting. Finite Elem Anal Des 171:103377
Obregon J, Kim A, Jung JY (2019) RuleCOSI: combination and simplification of production rules from boosted decision trees for imbalanced classification. Expert Syst Appl 126:64–82
Olah C, Satyanarayan A, Johnson I, Carter S, Schubert L, Ye K, Mordvintsev A (2018) The building blocks of interpretability. Distill 3(3):e10
Ostad-Ali-Askari K, Shayannejad M (2021) Computation of subsurface drain spacing in the unsteady conditions using artificial neural networks (ANN). Appl Water Sci 11(2):1–9
Ostad-Ali-Askari K, Shayannejad M, Ghorbanizadeh-Kharazi H (2017) Artificial neural network for modeling nitrate pollution of groundwater in marginal area of Zayandeh-rood river, Isfahan, Iran. KSCE J Civ Eng 21(1):134–140
O'Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, Holzinger K, Holzinger A, Sajid MI, Ashrafian H (2019) Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robotics Comput Assist Surg 15(1):e1968
Padarian J, McBratney AB, Minasny B (2020) Game theory interpretation of digital soil mapping convolutional neural networks. Soil 6(2):389–397
Páez A (2019) The pragmatic turn in explainable artificial intelligence (XAI). Mind Mach 29(3):441–459
Pan X, Tang F, Dong W, Ma C, Meng Y, Huang F, Lee TY, Xu C (2019) Content-based visual summarization for image collections. IEEE Trans Vis Comput Graph
Park DH, Hendricks LA, Akata Z, Rohrbach A, Schiele B, Darrell T, Rohrbach M (2018) Multimodal explanations: justifying decisions and pointing to the evidence. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8779–8788
Payer C, Stern D, Bischof H, Urschler M (2019) Integrating spatial configuration into heatmap regression based CNNs for landmark localization. Med Image Anal 54:207–219
Peloquin D, DiMaio M, Bierer B, Barnes M (2020) Disruptive and avoidable: GDPR challenges to secondary research uses of data. Eur J Hum Genet 28(6):697–705
Polato M, Aiolli F (2019) Boolean kernels for rule based interpretation of support vector machines. Neurocomputing 342:113–124
Raaijmakers S (2019) Artificial intelligence for law enforcement: challenges and opportunities. IEEE Secur Priv 17(5):74–77
go back to reference Rai A (2020) Explainable AI: from black box to glass box. J Acad Mark Sci 48(1):137?141CrossRef Rai A (2020) Explainable AI: from black box to glass box. J Acad Mark Sci 48(1):137?141CrossRef
go back to reference Rajapaksha D, Bergmeir C, Buntine W (2020) LoRMIkA: local rule-based model interpretability with k-optimal associations. Inf Sci 540:221?241MathSciNetCrossRef Rajapaksha D, Bergmeir C, Buntine W (2020) LoRMIkA: local rule-based model interpretability with k-optimal associations. Inf Sci 540:221?241MathSciNetCrossRef
go back to reference Rajkomar A, Oren E, Chen K, Dai AM, Hajaj N, Hardt M, Liu PJ, Liu X, Marcus J, Sun M et al (2018) Scalable and accurate deep learning with electronic health records. NPJ Digit Med 1(1):1?10CrossRef Rajkomar A, Oren E, Chen K, Dai AM, Hajaj N, Hardt M, Liu PJ, Liu X, Marcus J, Sun M et al (2018) Scalable and accurate deep learning with electronic health records. NPJ Digit Med 1(1):1?10CrossRef
go back to reference Ren X, Xing Z, Xia X, Lo D, Wang X, Grundy J (2019) Neural network-based detection of self-admitted technical debt: from performance to explainability. ACM Trans Softw Eng Methodol (TOSEM) 28(3):1?45CrossRef Ren X, Xing Z, Xia X, Lo D, Wang X, Grundy J (2019) Neural network-based detection of self-admitted technical debt: from performance to explainability. ACM Trans Softw Eng Methodol (TOSEM) 28(3):1?45CrossRef
go back to reference Ribeiro MT, Singh S, Guestrin C (2016) ?Why should I trust you?? explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135?1144 Ribeiro MT, Singh S, Guestrin C (2016) ?Why should I trust you?? explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1135?1144
go back to reference Ribeiro PC, Schardong GG, Barbosa SD, de Souza CS, Lopes H (2019) Visual exploration of an ensemble of classifiers. Comput Graph 85:23?41CrossRef Ribeiro PC, Schardong GG, Barbosa SD, de Souza CS, Lopes H (2019) Visual exploration of an ensemble of classifiers. Comput Graph 85:23?41CrossRef
go back to reference Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206?215CrossRef Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206?215CrossRef
go back to reference Sabol P, Sinčák P, Hartono P, Kočan P, Benetinová Z, Blichárová A, Verbóová Ľ, Štammová E, Sabolová-Fabianová A, Jašková A (2020) Explainable classifier for improving the accountability in decision-making for colorectal cancer diagnosis from histopathological images. J Biomed Inform 109:103523 Sabol P, Sinčák P, Hartono P, Kočan P, Benetinová Z, Blichárová A, Verbóová Ľ, Štammová E, Sabolová-Fabianová A, Jašková A (2020) Explainable classifier for improving the accountability in decision-making for colorectal cancer diagnosis from histopathological images. J Biomed Inform 109:103523
go back to reference Sagi O, Rokach L (2020) Explainable decision forest: transforming a decision forest into an interpretable tree. Inf Fusion 61:124?138CrossRef Sagi O, Rokach L (2020) Explainable decision forest: transforming a decision forest into an interpretable tree. Inf Fusion 61:124?138CrossRef
go back to reference Salmeron JL, Correia MB, Palos-Sanchez PR (2019) Complexity in forecasting and predictive models. Complexity 2019 Salmeron JL, Correia MB, Palos-Sanchez PR (2019) Complexity in forecasting and predictive models. Complexity 2019
go back to reference Sanz H, Valim C, Vegas E, Oller JM, Reverter F (2018) SVM-RFE: selection and visualization of the most relevant features through non-linear kernels. BMC Bioinform 19(1):1?18CrossRef Sanz H, Valim C, Vegas E, Oller JM, Reverter F (2018) SVM-RFE: selection and visualization of the most relevant features through non-linear kernels. BMC Bioinform 19(1):1?18CrossRef
go back to reference Sarvghad A, Tory M, Mahyar N (2016) Visualizing dimension coverage to support exploratory analysis. IEEE Trans Visual Comput Graph 23(1):21?30CrossRef Sarvghad A, Tory M, Mahyar N (2016) Visualizing dimension coverage to support exploratory analysis. IEEE Trans Visual Comput Graph 23(1):21?30CrossRef
go back to reference Schneeberger D, Stöger K, Holzinger A (2020) The European legal framework for medical AI. In: International cross-domain conference for machine learning and knowledge extraction. Springer, Cham, pp 209?226 Schneeberger D, Stöger K, Holzinger A (2020) The European legal framework for medical AI. In: International cross-domain conference for machine learning and knowledge extraction. Springer, Cham, pp 209?226
go back to reference Self JZ, Dowling M, Wenskovitch J, Crandell I, Wang M, House L, Leman S, North C (2018) Observation-level and parametric interaction for high-dimensional data analysis. ACM Trans Interact Intell Syst (TIIS) 8(2):1?36CrossRef Self JZ, Dowling M, Wenskovitch J, Crandell I, Wang M, House L, Leman S, North C (2018) Observation-level and parametric interaction for high-dimensional data analysis. ACM Trans Interact Intell Syst (TIIS) 8(2):1?36CrossRef
go back to reference Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2020) Grad-cam: visual explanations from deep networks via gradient-based localization. Int J Comput Vis 128(2):336?359CrossRef Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2020) Grad-cam: visual explanations from deep networks via gradient-based localization. Int J Comput Vis 128(2):336?359CrossRef
go back to reference Setzu M, Guidotti R, Monreale A, Turini F, Pedreschi D, Giannotti F (2021) Glocalx-from local to global explanations of black box AI models. Artif Intell 294:103457MathSciNetMATHCrossRef Setzu M, Guidotti R, Monreale A, Turini F, Pedreschi D, Giannotti F (2021) Glocalx-from local to global explanations of black box AI models. Artif Intell 294:103457MathSciNetMATHCrossRef
go back to reference Shi L, Teng Z, Wang L, Zhang Y, Binder A (2018) Deepclue: visual interpretation of text-based deep stock prediction. IEEE Trans Knowl Data Eng 31(6):1094?1108CrossRef Shi L, Teng Z, Wang L, Zhang Y, Binder A (2018) Deepclue: visual interpretation of text-based deep stock prediction. IEEE Trans Knowl Data Eng 31(6):1094?1108CrossRef
go back to reference Shrikumar A, Greenside P, Kundaje A (2017) Learning important features through propagating activation differences. In: International conference on machine learning. PMLR, pp 3145?3153 Shrikumar A, Greenside P, Kundaje A (2017) Learning important features through propagating activation differences. In: International conference on machine learning. PMLR, pp 3145?3153
go back to reference Singh N, Singh P, Bhagat D (2019) A rule extraction approach from support vector machines for diagnosing hypertension among diabetics. Expert Syst Appl 130:188?205CrossRef Singh N, Singh P, Bhagat D (2019) A rule extraction approach from support vector machines for diagnosing hypertension among diabetics. Expert Syst Appl 130:188?205CrossRef
go back to reference Singh A, Sengupta S, Lakshminarayanan V (2020) Explainable deep learning models in medical image analysis. J Imaging 6(6):52CrossRef Singh A, Sengupta S, Lakshminarayanan V (2020) Explainable deep learning models in medical image analysis. J Imaging 6(6):52CrossRef
go back to reference Song S, Huang H, Ruan T (2019) Abstractive text summarization using LSTM-CNN based deep learning. Multimed Tools Appl 78(1):857?875CrossRef Song S, Huang H, Ruan T (2019) Abstractive text summarization using LSTM-CNN based deep learning. Multimed Tools Appl 78(1):857?875CrossRef
go back to reference Spinner T, Schlegel U, Schäfer H, El-Assady M (2019) explAIner: a visual analytics framework for interactive and explainable machine learning. IEEE Trans Vis Comput Graph 26(1):1064?1074 Spinner T, Schlegel U, Schäfer H, El-Assady M (2019) explAIner: a visual analytics framework for interactive and explainable machine learning. IEEE Trans Vis Comput Graph 26(1):1064?1074
go back to reference Stojić A, Stanić N, Vuković G, Stanišić S, Perišić M, Šoštarić A, Lazić L (2019) Explainable extreme gradient boosting tree-based prediction of toluene, ethylbenzene and xylene wet deposition. Sci Total Environ 653:140?147 Stojić A, Stanić N, Vuković G, Stanišić S, Perišić M, Šoštarić A, Lazić L (2019) Explainable extreme gradient boosting tree-based prediction of toluene, ethylbenzene and xylene wet deposition. Sci Total Environ 653:140?147
go back to reference Strobelt H, Gehrmann S, Pfister H, Rush AM (2017) Lstmvis: a tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Trans Vis Comput Graph 24(1):667?676CrossRef Strobelt H, Gehrmann S, Pfister H, Rush AM (2017) Lstmvis: a tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Trans Vis Comput Graph 24(1):667?676CrossRef
go back to reference Strobelt H, Gehrmann S, Behrisch M, Perer A, Pfister H, Rush AM (2018) SEQ2SEQ-VIS: a visual debugging tool for sequence-to-sequence models. IEEE Trans Vis Comput Graph 25(1):353?363CrossRef Strobelt H, Gehrmann S, Behrisch M, Perer A, Pfister H, Rush AM (2018) SEQ2SEQ-VIS: a visual debugging tool for sequence-to-sequence models. IEEE Trans Vis Comput Graph 25(1):353?363CrossRef
go back to reference Štrumbelj E, Kononenko I (2014) Explaining prediction models and individual predictions with feature contributions. Knowl Inf Syst 41(3):647?665CrossRef Štrumbelj E, Kononenko I (2014) Explaining prediction models and individual predictions with feature contributions. Knowl Inf Syst 41(3):647?665CrossRef
go back to reference Su J, Vargas DV, Sakurai K (2019) One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 23(5):828?841CrossRef Su J, Vargas DV, Sakurai K (2019) One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 23(5):828?841CrossRef
go back to reference Swartout WR, Moore JD (1993) Explanation in second generation expert systems. In: Second generation expert systems. Springer, Cham, pp 543?585 Swartout WR, Moore JD (1993) Explanation in second generation expert systems. In: Second generation expert systems. Springer, Cham, pp 543?585
go back to reference Tan Q, Ye M, Ma AJ, Yang B, Yip TCF, Wong GLH, Yuen PC (2020) Explainable uncertainty-aware convolutional recurrent neural network for irregular medical time series. IEEE Trans Neural Netw Learn Syst Tan Q, Ye M, Ma AJ, Yang B, Yip TCF, Wong GLH, Yuen PC (2020) Explainable uncertainty-aware convolutional recurrent neural network for irregular medical time series. IEEE Trans Neural Netw Learn Syst
go back to reference Tjoa E, Guan C (2020) A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Trans Neural Netw Learn Syst Tjoa E, Guan C (2020) A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Trans Neural Netw Learn Syst
go back to reference Turkay C, Kaya E, Balcisoy S, Hauser H (2016) Designing progressive and interactive analytics processes for high-dimensional data analysis. IEEE Trans Vis Comput Graph 23(1):131?140CrossRef Turkay C, Kaya E, Balcisoy S, Hauser H (2016) Designing progressive and interactive analytics processes for high-dimensional data analysis. IEEE Trans Vis Comput Graph 23(1):131?140CrossRef
go back to reference Van Belle V, Van Calster B, Van Huffel S, Suykens JA, Lisboa P (2016) Explaining support vector machines: a color based nomogram. PLoS ONE 11(10):e0164568CrossRef Van Belle V, Van Calster B, Van Huffel S, Suykens JA, Lisboa P (2016) Explaining support vector machines: a color based nomogram. PLoS ONE 11(10):e0164568CrossRef
go back to reference Van Lent M, Fisher W, Mancuso M (2004) An explainable artificial intelligence system for small-unit tactical behavior. In: Proceedings of the national conference on artificial intelligence. AAAI Press; MIT Press, Menlo Park, London, pp 900?907 Van Lent M, Fisher W, Mancuso M (2004) An explainable artificial intelligence system for small-unit tactical behavior. In: Proceedings of the national conference on artificial intelligence. AAAI Press; MIT Press, Menlo Park, London, pp 900?907
go back to reference Van Luong H, Joukovsky B, Deligiannis N (2021) Designing interpretable recurrent neural networks for video reconstruction via deep unfolding. IEEE Trans Image Process 30:4099?4113MathSciNetCrossRef Van Luong H, Joukovsky B, Deligiannis N (2021) Designing interpretable recurrent neural networks for video reconstruction via deep unfolding. IEEE Trans Image Process 30:4099?4113MathSciNetCrossRef
go back to reference Veale M, Binns R, Edwards L (2018) Algorithms that remember: model inversion attacks and data protection law. Philos Trans Royal Soc A Math Phys Eng Sci 376(2133):20180083 Veale M, Binns R, Edwards L (2018) Algorithms that remember: model inversion attacks and data protection law. Philos Trans Royal Soc A Math Phys Eng Sci 376(2133):20180083
go back to reference Vellido A (2019) The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput Appl 1?15 Vellido A (2019) The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput Appl 1?15
go back to reference Waa J, Nieuwburg E, Cremers A, Neerincx M (2021) Evaluating XAI: a comparison of rule-based and example-based explanations. Artif Intell 291:103404MathSciNetMATHCrossRef Waa J, Nieuwburg E, Cremers A, Neerincx M (2021) Evaluating XAI: a comparison of rule-based and example-based explanations. Artif Intell 291:103404MathSciNetMATHCrossRef
go back to reference Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int Data Privacy Law 7(2):76?99CrossRef Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int Data Privacy Law 7(2):76?99CrossRef
go back to reference Wang SC (2003) Artificial neural network. In: Interdisciplinary computing in java programming. Springer, Cham, pp 81?100 Wang SC (2003) Artificial neural network. In: Interdisciplinary computing in java programming. Springer, Cham, pp 81?100
go back to reference Wang B, Gong NZ (2018) Stealing hyperparameters in machine learning. In: 2018 IEEE symposium on security and privacy (SP). IEEE, pp 36?52 Wang B, Gong NZ (2018) Stealing hyperparameters in machine learning. In: 2018 IEEE symposium on security and privacy (SP). IEEE, pp 36?52
go back to reference Wang H, Yeung DY (2016) Towards Bayesian deep learning: a framework and some existing methods. IEEE Trans Knowl Data Eng 28(12):3395?3408CrossRef Wang H, Yeung DY (2016) Towards Bayesian deep learning: a framework and some existing methods. IEEE Trans Knowl Data Eng 28(12):3395?3408CrossRef
go back to reference Wang Y, Aghaei F, Zarafshani A, Qiu Y, Qian W, Zheng B (2017) Computer-aided classification of mammographic masses using visually sensitive image features. J Xray Sci Technol 25(1):171?186 Wang Y, Aghaei F, Zarafshani A, Qiu Y, Qian W, Zheng B (2017) Computer-aided classification of mammographic masses using visually sensitive image features. J Xray Sci Technol 25(1):171?186
go back to reference Wang Q, Zhang K, Ororbia AG II, Xing X, Liu X, Giles CL (2018) An empirical evaluation of rule extraction from recurrent neural networks. Neural Comput 30(9):2568?2591MathSciNetCrossRef Wang Q, Zhang K, Ororbia AG II, Xing X, Liu X, Giles CL (2018) An empirical evaluation of rule extraction from recurrent neural networks. Neural Comput 30(9):2568?2591MathSciNetCrossRef
go back to reference Wang F, Kaushal R, Khullar D (2019b) Should health care demand interpretable artificial intelligence or accept ?black box? medicine? Ann Intern Med Wang F, Kaushal R, Khullar D (2019b) Should health care demand interpretable artificial intelligence or accept ?black box? medicine? Ann Intern Med
go back to reference Wang S, Zhou T, Bilmes J (2019c) Bias also matters: bias attribution for deep neural network explanation. In: International conference on machine learning. PMLR, pp 6659?6667 Wang S, Zhou T, Bilmes J (2019c) Bias also matters: bias attribution for deep neural network explanation. In: International conference on machine learning. PMLR, pp 6659?6667
go back to reference Wang Y, Wang D, Geng N, Wang Y, Yin Y, Jin Y (2019d) Stacking-based ensemble learning of decision trees for interpretable prostate cancer detection. Appl Soft Comput 77:188?204CrossRef Wang Y, Wang D, Geng N, Wang Y, Yin Y, Jin Y (2019d) Stacking-based ensemble learning of decision trees for interpretable prostate cancer detection. Appl Soft Comput 77:188?204CrossRef
go back to reference Wasilow S, Thorpe JB (2019) Artificial intelligence, robotics, ethics, and the military: a Canadian perspective. AI Mag 40(1) Wasilow S, Thorpe JB (2019) Artificial intelligence, robotics, ethics, and the military: a Canadian perspective. AI Mag 40(1)
go back to reference Weitz K, Schiller D, Schlagowski R, Huber T, André E (2020) ?Let me explain!?: exploring the potential of virtual agents in explainable AI interaction design. J Multimodal User Interfaces 1?12 Weitz K, Schiller D, Schlagowski R, Huber T, André E (2020) ?Let me explain!?: exploring the potential of virtual agents in explainable AI interaction design. J Multimodal User Interfaces 1?12
go back to reference Wickstrøm KK, ØyvindMikalsen K, Kampffmeyer M, Revhaug A, Jenssen R (2020) Uncertainty-aware deep ensembles for reliable and explainable predictions of clinical time series. IEEE J Biomed Health Inform Wickstrøm KK, ØyvindMikalsen K, Kampffmeyer M, Revhaug A, Jenssen R (2020) Uncertainty-aware deep ensembles for reliable and explainable predictions of clinical time series. IEEE J Biomed Health Inform
go back to reference Williford JR, May BB, Byrne J (2020) Explainable face recognition. In: European Conference on computer vision. Springer, Cham, pp 248?263 Williford JR, May BB, Byrne J (2020) Explainable face recognition. In: European Conference on computer vision. Springer, Cham, pp 248?263
go back to reference Wu Q, Burges CJ, Svore KM, Gao J (2010) Adapting boosting for information retrieval measures. Inf Retr 13(3):254?270CrossRef Wu Q, Burges CJ, Svore KM, Gao J (2010) Adapting boosting for information retrieval measures. Inf Retr 13(3):254?270CrossRef
go back to reference Wu J, Zhong Sh, Jiang J, Yang Y (2017) A novel clustering method for static video summarization. Multimed Tools Appl 76(7):9625?9641CrossRef Wu J, Zhong Sh, Jiang J, Yang Y (2017) A novel clustering method for static video summarization. Multimed Tools Appl 76(7):9625?9641CrossRef
go back to reference Wu M, Hughes M, Parbhoo S, Zazzi M, Roth V, Doshi-Velez F (2018) Beyond sparsity: tree regularization of deep models for interpretability. In: Proceedings of the AAAI conference on artificial intelligence, vol 32 Wu M, Hughes M, Parbhoo S, Zazzi M, Roth V, Doshi-Velez F (2018) Beyond sparsity: tree regularization of deep models for interpretability. In: Proceedings of the AAAI conference on artificial intelligence, vol 32
go back to reference Xu J, Zhang Z, Friedman T, Liang Y, Broeck G (2018) A semantic loss function for deep learning with symbolic knowledge. In: International conference on machine learning. PMLR, pp 5502?5511 Xu J, Zhang Z, Friedman T, Liang Y, Broeck G (2018) A semantic loss function for deep learning with symbolic knowledge. In: International conference on machine learning. PMLR, pp 5502?5511
go back to reference Yamamoto Y, Tsuzuki T, Akatsuka J, Ueki M, Morikawa H, Numata Y, Takahara T, Tsuyuki T, Tsutsumi K, Nakazawa R et al (2019) Automated acquisition of explainable knowledge from unannotated histopathology images. Nat Commun 10(1):1?9CrossRef Yamamoto Y, Tsuzuki T, Akatsuka J, Ueki M, Morikawa H, Numata Y, Takahara T, Tsuyuki T, Tsutsumi K, Nakazawa R et al (2019) Automated acquisition of explainable knowledge from unannotated histopathology images. Nat Commun 10(1):1?9CrossRef
go back to reference Yang SCH, Shafto P (2017) Explainable artificial intelligence via Bayesian teaching. In: NIPS 2017 workshop on teaching machines, robots, and humans, pp 127?137 Yang SCH, Shafto P (2017) Explainable artificial intelligence via Bayesian teaching. In: NIPS 2017 workshop on teaching machines, robots, and humans, pp 127?137
go back to reference Yang Z, Zhang A, Sudjianto A (2020) Enhancing explainability of neural networks through architecture constraints. IEEE Trans Neural Netw Learn Syst Yang Z, Zhang A, Sudjianto A (2020) Enhancing explainability of neural networks through architecture constraints. IEEE Trans Neural Netw Learn Syst
go back to reference Yeganejou M, Dick S, Miller J (2019) Interpretable deep convolutional fuzzy classifier. IEEE Trans Fuzzy Syst 28(7):1407?1419 Yeganejou M, Dick S, Miller J (2019) Interpretable deep convolutional fuzzy classifier. IEEE Trans Fuzzy Syst 28(7):1407?1419
go back to reference Yosinski J, Clune J, Nguyen A, Fuchs T, Lipson H (2015) Understanding neural networks through deep visualization. arXiv preprint arXiv:150606579 Yosinski J, Clune J, Nguyen A, Fuchs T, Lipson H (2015) Understanding neural networks through deep visualization. arXiv preprint arXiv:150606579
go back to reference Yousefi-Azar M, Hamey L (2017) Text summarization using unsupervised deep learning. Expert Syst Appl 68:93?105CrossRef Yousefi-Azar M, Hamey L (2017) Text summarization using unsupervised deep learning. Expert Syst Appl 68:93?105CrossRef
go back to reference Yu H, Yang S, Gu W, Zhang S (2017) Baidu driving dataset and end-to-end reactive control model. In: 2017 IEEE intelligent vehicles symposium (IV). IEEE, pp 341?346 Yu H, Yang S, Gu W, Zhang S (2017) Baidu driving dataset and end-to-end reactive control model. In: 2017 IEEE intelligent vehicles symposium (IV). IEEE, pp 341?346
go back to reference Yuan J, Xiong HC, Xiao Y, Guan W, Wang M, Hong R, Li ZY (2020) Gated CNN: Integrating multi-scale feature layers for object detection. Pattern Recogn 105:107131CrossRef Yuan J, Xiong HC, Xiao Y, Guan W, Wang M, Hong R, Li ZY (2020) Gated CNN: Integrating multi-scale feature layers for object detection. Pattern Recogn 105:107131CrossRef
go back to reference Zeltner D, Schmid B, Csiszár G, Csiszár O (2021) Squashing activation functions in benchmark tests: towards a more explainable artificial intelligence using continuous-valued logic. Knowl Based Syst 218:106779CrossRef Zeltner D, Schmid B, Csiszár G, Csiszár O (2021) Squashing activation functions in benchmark tests: towards a more explainable artificial intelligence using continuous-valued logic. Knowl Based Syst 218:106779CrossRef
go back to reference Zhang Qs, Zhu SC (2018) Visual interpretability for deep learning: a survey. Fronti Inf Technol Electron Eng 19(1):27?39CrossRef Zhang Qs, Zhu SC (2018) Visual interpretability for deep learning: a survey. Fronti Inf Technol Electron Eng 19(1):27?39CrossRef
go back to reference Zhang J, Wang Y, Molino P, Li L, Ebert DS (2018a) Manifold: a model-agnostic framework for interpretation and diagnosis of machine learning models. IEEE Trans Vis Comput Graph 25(1):364?373CrossRef Zhang J, Wang Y, Molino P, Li L, Ebert DS (2018a) Manifold: a model-agnostic framework for interpretation and diagnosis of machine learning models. IEEE Trans Vis Comput Graph 25(1):364?373CrossRef
go back to reference Zhang Q, Nian Wu Y, Zhu SC (2018b) Interpretable convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8827?8836 Zhang Q, Nian Wu Y, Zhu SC (2018b) Interpretable convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8827?8836
go back to reference Zhang Q, Yang Y, Ma H, Wu YN (2019) Interpreting CNNs via decision trees. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 6261?6270 Zhang Q, Yang Y, Ma H, Wu YN (2019) Interpreting CNNs via decision trees. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 6261?6270
go back to reference Zhang A, Teng L, Alterovitz G (2020a) An explainable machine learning platform for pyrazinamide resistance prediction and genetic feature identification of mycobacterium tuberculosis. J Am Med Inform Assoc Zhang A, Teng L, Alterovitz G (2020a) An explainable machine learning platform for pyrazinamide resistance prediction and genetic feature identification of mycobacterium tuberculosis. J Am Med Inform Assoc
go back to reference Zhang M, You H, Kadam P, Liu S, Kuo CCJ (2020b) Pointhop: an explainable machine learning method for point cloud classification. IEEE Trans Multimed 22(7):1744?1755CrossRef Zhang M, You H, Kadam P, Liu S, Kuo CCJ (2020b) Pointhop: an explainable machine learning method for point cloud classification. IEEE Trans Multimed 22(7):1744?1755CrossRef
go back to reference Zhang W, Tang S, Su J, Xiao J, Zhuang Y (2020c) Tell and guess: cooperative learning for natural image caption generation with hierarchical refined attention. Multimed Tools Appl 1?16 Zhang W, Tang S, Su J, Xiao J, Zhuang Y (2020c) Tell and guess: cooperative learning for natural image caption generation with hierarchical refined attention. Multimed Tools Appl 1?16
go back to reference Zhang Z, Beck MW, Winkler DA, Huang B, Sibanda W, Goyal H et al (2018c) Opening the black box of neural networks: methods for interpreting neural network models in clinical applications. Ann Transl Med 6(11) Zhang Z, Beck MW, Winkler DA, Huang B, Sibanda W, Goyal H et al (2018c) Opening the black box of neural networks: methods for interpreting neural network models in clinical applications. Ann Transl Med 6(11)
go back to reference Zhao W, Du S (2016) Spectral-spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach. IEEE Trans Geosci Remote Sens 54(8):4544?4554CrossRef Zhao W, Du S (2016) Spectral-spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach. IEEE Trans Geosci Remote Sens 54(8):4544?4554CrossRef
go back to reference Zheng S, Ding C (2020) A group lasso based sparse KNN classifier. Pattern Recogn Lett 131:227?233CrossRef Zheng S, Ding C (2020) A group lasso based sparse KNN classifier. Pattern Recogn Lett 131:227?233CrossRef
go back to reference Zheng Xl, Zhu My, Li Qb, Chen Cc, Tan Yc (2019) FinBrain: when finance meets AI 2.0. Front Inf Technol Electron Eng 20(7):914?924CrossRef Zheng Xl, Zhu My, Li Qb, Chen Cc, Tan Yc (2019) FinBrain: when finance meets AI 2.0. Front Inf Technol Electron Eng 20(7):914?924CrossRef
go back to reference Zhou B, Bau D, Oliva A, Torralba A (2018a) Interpreting deep visual representations via network dissection. IEEE Trans Pattern Anal Mach Intell 41(9):2131?2145CrossRef Zhou B, Bau D, Oliva A, Torralba A (2018a) Interpreting deep visual representations via network dissection. IEEE Trans Pattern Anal Mach Intell 41(9):2131?2145CrossRef
go back to reference Zhou X, Jiang P, Wang X (2018b) Recognition of control chart patterns using fuzzy SVM with a hybrid kernel function. J Intell Manuf 29(1):51?67CrossRef Zhou X, Jiang P, Wang X (2018b) Recognition of control chart patterns using fuzzy SVM with a hybrid kernel function. J Intell Manuf 29(1):51?67CrossRef
go back to reference Zhuang Yt, Wu F, Chen C, Pan Yh (2017) Challenges and opportunities: from big data to knowledge in AI 2.0. Front Inf Technol Electron Eng 18(1):3?14CrossRef Zhuang Yt, Wu F, Chen C, Pan Yh (2017) Challenges and opportunities: from big data to knowledge in AI 2.0. Front Inf Technol Electron Eng 18(1):3?14CrossRef
Metadata
Title: Explainable artificial intelligence: a comprehensive review
Authors: Dang Minh, H. Xiang Wang, Y. Fen Li, Tan N. Nguyen
Publication date: 18-11-2021
Publisher: Springer Netherlands
Published in: Artificial Intelligence Review, Issue 5/2022
Print ISSN: 0269-2821
Electronic ISSN: 1573-7462
DOI: https://doi.org/10.1007/s10462-021-10088-y