DOI: 10.1145/3461778.3462131

Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle

Published: 28 June 2021

ABSTRACT

The interpretability or explainability of AI systems (XAI) has been a topic gaining renewed attention in recent years across AI and HCI communities. Recent work has drawn attention to the emergent explainability requirements of in situ, applied projects, yet further exploratory work is needed to more fully understand this space. This paper investigates applied AI projects and reports on a qualitative interview study of individuals working on AI projects at a large technology and consulting company. Presenting an empirical understanding of the range of stakeholders in industrial AI projects, this paper also draws out the emergent explainability practices that arise as these projects unfold, highlighting the range of explanation audiences (who), as well as how their explainability needs evolve across the AI project lifecycle (when). We discuss the importance of adopting a sociotechnical lens in designing AI systems, noting how the “AI lifecycle” can serve as a design metaphor to further the XAI design field.


  • Published in

    DIS '21: Proceedings of the 2021 ACM Designing Interactive Systems Conference
    June 2021
    2082 pages
    ISBN:9781450384766
    DOI:10.1145/3461778

    Copyright © 2021 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

Overall Acceptance Rate: 1,158 of 4,684 submissions, 25%

