ABSTRACT
While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development has led to an ongoing resurgence of research on explainable artificial intelligence (XAI), which aims to reduce the opaqueness of such black-box models. However, much of the current XAI research focuses on machine learning practitioners and engineers while omitting the specific needs of end-users. In this paper, we examine the impact of virtual agents within the field of XAI on the perceived trustworthiness of autonomous intelligent systems. To assess the practicality of this concept, we conducted a user study based on a simple speech recognition task. The experiment yielded significant evidence that integrating virtual agents into XAI interaction design increases user trust in the underlying autonomous intelligent system.
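The paper reports a user study rather than source code, but the task it describes suggests a small-footprint keyword-spotting pipeline whose prediction confidence a virtual agent could verbalize. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the architecture, the label set `KEYWORDS`, and the names `TinyKeywordSpotter` and `agent_explanation` are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a tiny keyword-spotting CNN plus a virtual-agent-style
# verbalization of its softmax confidence. All sizes, labels, and phrasings
# below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

KEYWORDS = ["yes", "no", "up", "down", "left", "right"]  # assumed label set

class TinyKeywordSpotter(nn.Module):
    """Small CNN over a (1, 40, 100) MFCC-like spectrogram patch."""
    def __init__(self, n_classes: int = len(KEYWORDS)):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(32 * 10 * 25, n_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # -> (16, 20, 50)
        x = self.pool(F.relu(self.conv2(x)))   # -> (32, 10, 25)
        return self.fc(torch.flatten(x, 1))    # class logits

def agent_explanation(logits: torch.Tensor) -> str:
    """Turn the classifier's softmax confidence into an agent utterance."""
    probs = F.softmax(logits, dim=-1).squeeze(0)
    conf, idx = probs.max(dim=0)
    word, confidence = KEYWORDS[idx.item()], conf.item()
    if confidence > 0.9:
        return f'I am quite sure you said "{word}" ({confidence:.0%} confidence).'
    return f'I think you said "{word}", but I am only {confidence:.0%} sure.'

if __name__ == "__main__":
    model = TinyKeywordSpotter()
    features = torch.randn(1, 1, 40, 100)  # placeholder for real MFCC features
    print(agent_explanation(model(features)))
```

In a deployed system, the random placeholder input would be replaced by MFCC features extracted from the user's recorded utterance, and the agent's wording would be tuned to the interaction design under study.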