ABSTRACT
Conventional eXplainable AI (XAI) approaches are often ineffective in supporting decision-makers across domains. In some instances, they can even induce automation bias or algorithmic aversion, or are simply ignored as a redundant feature. Drawing on the cognitive psychology literature, we outline a strategy for tailoring XAI interface design so that it has long-lasting educational value. We propose features that could support the development of both domain-related and technical skills, thus narrowing the digital divide between “new” and “old” experts. Lastly, we suggest an intermittent explainability approach that could help strike a balance between seamless and cognitively engaging explanations.
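The intermittent explainability idea above can be read as a simple interface policy: rather than attaching an explanation to every recommendation, the system surfaces one only periodically, or when the model is uncertain, so the decision-maker stays cognitively engaged without constant friction. A minimal sketch of one such policy follows; the interval and confidence threshold are hypothetical illustration parameters, not values proposed in the paper.

```python
from dataclasses import dataclass


@dataclass
class IntermittentExplainer:
    """Decide when to surface a full explanation to the decision-maker.

    Illustrative policy: explain every `interval`-th case, and always
    explain when model confidence falls below `low_confidence`.
    """
    interval: int = 5            # show a full explanation every Nth case
    low_confidence: float = 0.7  # always explain uncertain predictions
    _seen: int = 0               # running count of cases presented

    def should_explain(self, confidence: float) -> bool:
        self._seen += 1
        # Low-confidence cases are always explained; otherwise explain
        # only on the scheduled interval to avoid habituation.
        return confidence < self.low_confidence or self._seen % self.interval == 0


explainer = IntermittentExplainer(interval=3, low_confidence=0.6)
confidences = [0.9, 0.95, 0.8, 0.5, 0.99]
shown = [explainer.should_explain(c) for c in confidences]
print(shown)  # the third case hits the interval; the fourth is low-confidence
```

In practice the trigger could instead be tied to case novelty or to the user's recent reliance behavior; the point of the sketch is only that explanation delivery is a schedulable design variable, not an all-or-nothing choice.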