DOI: 10.1145/3308532.3329441 (extended abstract)

"Do you trust me?": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design

Published: 01 July 2019

ABSTRACT

While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development has led to an ongoing resurgence of explainable artificial intelligence (XAI), a research area that aims to reduce the opaqueness of such black-box models. However, much of current XAI research focuses on machine learning practitioners and engineers while omitting the specific needs of end-users. In this paper, we examine the impact of virtual agents on the perceived trustworthiness of autonomous intelligent systems in the context of XAI. To assess the practicality of this concept, we conducted a user study based on a simple speech recognition task. In this experiment, we found significant evidence suggesting that integrating virtual agents into XAI interaction design leads to an increase in trust in the autonomous intelligent system.
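The abstract does not describe how explanations were actually surfaced to participants. As a rough, hypothetical sketch of the kind of pipeline it alludes to (a keyword-spotting prediction turned into an utterance a virtual agent could speak), consider the Python fragment below; the label set, confidence wording, and all function names are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch only: turns a keyword-spotting prediction into a
    # short explanation a virtual agent could speak. Labels and wording are
    # assumptions, not the authors' implementation.
    import numpy as np

    KEYWORDS = ["yes", "no", "up", "down", "left", "right"]  # assumed label set

    def softmax(logits):
        # Numerically stable softmax over the model's raw scores.
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    def agent_explanation(logits, keywords=KEYWORDS):
        # Pick the top prediction and its closest competitor and phrase them
        # as a confidence statement for the agent to verbalize.
        probs = softmax(np.asarray(logits, dtype=float))
        order = np.argsort(probs)
        top, runner_up = int(order[-1]), int(order[-2])
        return (f'I heard "{keywords[top]}" with {probs[top]:.0%} confidence; '
                f'the closest alternative was "{keywords[runner_up]}" '
                f'at {probs[runner_up]:.0%}.')

    # Fake logits standing in for a classifier's output over the six keywords.
    print(agent_explanation([2.1, 0.3, -0.5, 0.0, 1.4, -1.2]))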


Published in

IVA '19: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents
July 2019, 282 pages
ISBN: 9781450366724
DOI: 10.1145/3308532

Copyright © 2019 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

IVA '19 paper acceptance rate: 15 of 63 submissions (24%). Overall acceptance rate: 53 of 196 submissions (27%).
