DOI: 10.1145/3531146.3534639
Research Article · Open Access

A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods

Published: 20 June 2022

ABSTRACT

The recent surge in publications on explainable artificial intelligence (XAI) has created an almost insurmountable wall for anyone trying to get started in, or stay up to date with, the field. For this reason, articles and reviews that present taxonomies of XAI methods are a welcome way to gain an overview. Building on this idea, there is a current trend of producing such taxonomies, which has led to several competing approaches for constructing them. In this paper, we review recent approaches to constructing taxonomies of XAI methods and discuss both the general challenges they face and their individual advantages and limitations. Our review is intended to make scholars aware of the challenges that current taxonomies face. As we will argue, relying on any single one of the approaches we found may not suffice to chart the field of XAI. To address this problem, we propose and discuss three possible solutions: a new taxonomy that incorporates the reviewed ones, a database of XAI methods, and a decision tree that helps in choosing fitting methods.
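The third proposed solution, a decision tree for choosing fitting XAI methods, can be sketched in code. The sketch below is purely illustrative and not the authors' actual tree: the two criteria (model-agnostic vs. model-specific, local vs. global scope) and the suggested method families are assumptions, chosen only because they are common axes in XAI taxonomies.

```python
# Hypothetical sketch of a decision tree for choosing XAI methods.
# Criteria and suggestions are illustrative, not the paper's proposal.

def suggest_xai_methods(model_agnostic: bool, scope: str) -> list[str]:
    """Return candidate XAI method families for the given requirements.

    model_agnostic: True if the method must work on any black-box model.
    scope: "local" (explain single predictions) or "global"
           (explain overall model behaviour).
    """
    if model_agnostic:
        if scope == "local":
            # Post-hoc, per-prediction explanations for arbitrary models.
            return ["LIME", "SHAP", "counterfactual explanations"]
        # Model-agnostic views of overall behaviour.
        return ["partial dependence plots", "global surrogate models"]
    # Model-specific techniques, e.g. for deep neural networks.
    if scope == "local":
        return ["layer-wise relevance propagation", "integrated gradients"]
    return ["TCAV (concept activation vectors)"]

print(suggest_xai_methods(model_agnostic=True, scope="local"))
```

In a fuller version, each leaf would link to the corresponding entries in the proposed database of XAI methods, so that answering a few questions yields concrete, documented candidates rather than just family names.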


Published in
FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, June 2022, 2351 pages.
ISBN: 9781450393522
DOI: 10.1145/3531146
Publisher: Association for Computing Machinery, New York, NY, United States
Copyright © 2022 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.