
Designing for Responsible Trust in AI Systems: A Communication Perspective

Published: 20 June 2022, in FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA. DOI: 10.1145/3531146.3533182

ABSTRACT

Current literature and public discourse on “trust in AI” are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust. Given that AI systems differ in their level of trustworthiness, two open questions come to the fore: how should AI trustworthiness be responsibly communicated to ensure appropriate and equitable trust judgments by different users, and how can we protect users from deceptive attempts to earn their trust? We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH, which describes how trustworthiness is communicated in AI systems through trustworthiness cues and how those cues are processed by people to make trust judgments. Besides AI-generated content, we highlight transparency and interaction as AI systems’ affordances that present a wide range of trustworthiness cues to users. By bringing to light the variety of users’ cognitive processes to make trust judgments and their potential limitations, we urge technology creators to make conscious decisions in choosing reliable trustworthiness cues for target users and, as an industry, to regulate this space and prevent malicious use. Towards these goals, we define the concepts of warranted trustworthiness cues and expensive trustworthiness cues, and propose a checklist of requirements to help technology creators identify appropriate cues to use. We present a hypothetical use case to illustrate how practitioners can use MATCH to design AI systems responsibly, and discuss future directions for research and industry efforts aimed at promoting responsible trust in AI.
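The abstract does not spell out the proposed checklist itself, but as a minimal, hypothetical sketch of how a technology creator might encode candidate trustworthiness cues and filter them against checklist-style requirements, one could imagine something like the following (the class, field names, and the three requirements shown are illustrative assumptions, not the paper's actual criteria):

```python
from dataclasses import dataclass


@dataclass
class TrustworthinessCue:
    """A signal an AI system presents to users about its trustworthiness.

    Field names are illustrative assumptions, not the paper's checklist items.
    """
    name: str
    warranted: bool                    # does the cue reflect the system's actual trustworthiness?
    expensive: bool                    # would the cue be costly for an untrustworthy actor to fake?
    accessible_to_target_users: bool   # can the intended users notice and interpret the cue?


def select_cues(candidates: list[TrustworthinessCue]) -> list[TrustworthinessCue]:
    """Keep only cues that satisfy every (hypothetical) checklist requirement."""
    return [
        cue for cue in candidates
        if cue.warranted and cue.expensive and cue.accessible_to_target_users
    ]


if __name__ == "__main__":
    candidates = [
        TrustworthinessCue("third-party audit report", True, True, True),
        TrustworthinessCue("marketing claim of '99% accuracy'", False, False, True),
    ]
    for cue in select_cues(candidates):
        print("use:", cue.name)
```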

