The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI

https://doi.org/10.1016/j.ijhcs.2020.102551

Highlights

  • This study examines the effect of explainability in AI on user trust and attitudes toward AI.

  • Conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm and examines them in relation to trust.

  • Shows the dual roles of causability and explainability in terms of their underlying links to trust.

  • Causability lends justification for what should be explained and how.

  • Causable explainable AI will help people understand the decision-making process of AI algorithms.

Abstract

Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines them in relation to trust by testing how they affect user-perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate users' trust, whereas causability, the extent to which users can understand those explanations, affords users emotional confidence. Causability lends justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.

Introduction

The role of algorithms in our lives is growing rapidly, from simply recommending online content or search results to more critical uses, such as diagnosing human cancer risk in medical fields (Chazette and Schneider, 2020). Algorithms are widely used for data collection, computation, processing, and automated decision-making. By widely mediating and assisting in human decision-making, algorithms are becoming a ubiquitous part of human lives (Rai, 2020). While algorithms can offer highly personalized and relevant services and content, the effectiveness of artificial intelligence (AI) systems is limited by algorithms' current inability to explain their decisions and operations to users. Complicated matters, such as fairness, accountability, transparency, and explainability (FATE), are inextricably linked to algorithmic phenomena (Ferrario, Loi, and Viganò, 2020; Shin, Zhong, and Biocca, 2020). Questions regarding how to safeguard the goals, services, and underlying processes of AI, who should be held liable for the consequences of AI, and whether AI is doing things that humans believe are ethical remain unclear and controversial (Dörr and Hollnbuchner, 2017). These subjects, including FATE and ethical concerns about how we address and govern such issues, will be critical to AI development and innovation (Crain, 2018).

The black-box nature of algorithmic processes has led to calls for research on explainability in AI (Castelvecchi, 2016; Holzinger, 2016), for example, research on the effects of explainability and transparency in the adoption of personalized news recommendations. Shin (2020) proposes an idea of algorithmic trust in terms of transparency in the content recommendation context. How users interpret algorithmic features and how users understand algorithm-based systems will be important questions to address as AI becomes more widespread (Shin, 2020). This topic will be even more critical in news recommendation systems, where fairness, accountability, and credibility are inherent journalistic values (Dörr and Hollnbuchner, 2017). There has been increasing pressure to provide the right explanation of how and why a result was produced (Hoeve et al., 2017). Despite their importance, few studies have examined the roles of explainability and interpretability in AI. Recent research on algorithm acceptance (Shin et al., 2020) suggests a heuristic role of explainability in the acceptance of algorithm/AI services. When users interact with an algorithm, they inevitably encounter issues of algorithm functions, which are essentially subjective insofar as they are dependent upon human judgment and context (Shin and Park, 2019). Thus, along with explainability, it is important to examine how users interpret such explanations, how they reason about causality and causal inference (Arrieta, 2020), and the process through which people work to understand issues in algorithms that are ambiguous and uncertain (Vallverdú, 2020). Against the increasing concerns about the opacity of black-box AI, this study operationalizes trust in algorithms by clarifying the role of explainability in reference to causability. It examines FATE in the context of algorithmic processing and clarifies its roles and influence in user interaction with AI. The following research questions (RQ) are formulated based on these research gaps:

  • RQ1: How does explainability play out in user heuristics and systematic evaluations for personalized and customized AI news?

  • RQ2: How do users perceive and evaluate the given explanations, and how can the quality of explanations be measured?

  • RQ3: How does explainability, combined with causability, affect trust and the user experience with a personalized recommender system?

Findings reveal a dual process that users go through: a heuristic process driven by causability and a systematic process driven by explainability, through which they evaluate algorithm features and decide how and whether to continue using AI services. Whenever people encounter algorithms, they must make decisions as to whether, how, and to what extent to trust algorithm-based services (Wölker and Powell, 2020). Heuristically, users evaluate explanations based on their existing knowledge and beliefs, and partly based on their understanding of the algorithms. Users evaluate the quality of explanations based on their own level of interpretability and understandability (Samek, Binder, Montavon, Lapuschkin, and Muller, 2017). Systematically, users explore AI product information when evaluating algorithmic functionality. In this process, issues of FAT play a role as heuristic cues, triggering user trust. Levels and kinds of FAT are perceived as a function of user appraisal of explainability (Moller, Trilling, Helberger, and van Es, 2018). When such explanations are reasonable and understandable, users begin to accept FAT and trust the AI system.

The causal implications of trust and algorithmic explainability provide important directions for academia and practice. Theoretically, clarifying the role of explainability in AI would make meaningful contributions to the ongoing discussion of human-AI interaction (HAII; Sundar, 2020). In particular, examining the human-interpretable heuristic processes of explainable AI (XAI) from a human factors perspective is useful because it provides new ways of designing and developing causable XAI (Combs, Fendley, and Bihl, 2020). The findings contribute to the formalisation of the field of explainability and causability in HAII by showing how the concepts are conceptualized, by illustrating how they can be implemented in user interfaces, and by examining how the effect and the quality of explainability are measured (Samek et al., 2017). From a practical standpoint, the heuristic role of causability and the systematic dimension of explainability in algorithms lend strategic direction on how to design and develop XAI and user-centered algorithms in order to facilitate algorithm adoption in mainstream services. As current AI models are increasingly criticized for their black-box nature, the roles of explainability and related causability will surely give insights into user confidence in algorithms (Shin, 2020).

Section snippets

XAI: finding correlation and causation

XAI refers to machine learning and AI technologies that can offer human-understandable justifications for their output or procedures (Gunning et al., 2019). Explainability and transparency are two very important elements related to XAI (Ehsan and Riedl, 2019). While there is no uniformly accepted definition of explainability in AI, it can be conceptualized as the ability to explain the way in which an algorithm works in order to understand how and why it has delivered particular outcomes…
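As a concrete illustration of this definition (not taken from the paper), the sketch below shows one simple way a news recommender could surface a human-understandable "why" for its output: the per-feature contributions of a linear scoring model double as the explanation. The model, feature names, and weights are all hypothetical.

```python
# Illustrative sketch only: a linear scorer whose per-feature contributions
# double as a human-readable "why this article was recommended".
# Feature names and weights are hypothetical, not from the paper.

WEIGHTS = {"matches_reading_history": 0.6, "topic_overlap": 0.3, "recency": 0.1}

def explain_recommendation(article_features: dict) -> list[str]:
    # Contribution of each cue = weight * feature value; rank by impact.
    contributions = {
        name: WEIGHTS[name] * article_features.get(name, 0.0) for name in WEIGHTS
    }
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{name.replace('_', ' ')} (contribution {value:.2f})"
            for name, value in ranked if value > 0]

# Explanation text that could be shown next to the recommended article.
print(explain_recommendation(
    {"matches_reading_history": 0.9, "topic_overlap": 0.5, "recency": 0.8}))
```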

Hypotheses proposition: causability and explainability in AI

The proposed model includes users' cognitive and emotional responses to causability and explainability in AI (Fig. 1). Causability is proposed as a predecessor of explainability, and both are posited as antecedents of FAT.
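To make the hypothesized structure concrete, here is a minimal sketch of how such a path model could be specified and estimated. It assumes the open-source semopy package and hypothetical variable names for the survey composites; the study itself reports its analysis in SPSS AMOS.

```python
# Sketch under stated assumptions: lavaan-style specification of the
# hypothesized paths (causability -> explainability -> FAT -> trust ->
# performance expectancy). Variable/column names are hypothetical.
import pandas as pd
from semopy import Model

MODEL_DESC = """
explainability ~ causability
FAT ~ causability + explainability
trust ~ FAT
performance_expectancy ~ trust
"""

data = pd.read_csv("survey_scale_scores.csv")  # hypothetical composite scores
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```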

Data collection and sample

This study recruited a total of 350 individuals through online (Qualtrics) and offline (local universities) channels in exchange for monetary compensation and class credit. The data were merged and analyzed using SPSS AMOS. The sample was confined to respondents who had prior experience with algorithm services (automatic recommendation, content suggestions, online news aggregation, etc.). To ensure the reliability and validity of responses, a series of validation questions was added to…
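As an illustration of how such validation items are typically used to screen responses before analysis (the paper does not show its screening procedure), here is a hedged sketch assuming hypothetical column names in the merged data file.

```python
# Hypothetical screening step: keep only respondents who pass an
# instructed-response validation item and report prior experience with
# algorithmic services. Column names and file name are assumptions.
import pandas as pd

df = pd.read_csv("merged_responses.csv")  # merged online + offline data
valid = df[
    (df["validation_item"] == "strongly agree")    # instructed-response check
    & (df["prior_algorithm_experience"] == "yes")  # experience screener
]
print(f"Retained {len(valid)} of {len(df)} respondents")
```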

Structural model testing

Structural path testing revealed that the relations drawn in the hypotheses were largely supported (Fig. 3 and Table 3). All the path coefficients were statistically significant (p < .001 or p < .05). Trust is significantly influenced by FAT, which is determined by causability and explainability. These factors altogether account for about 58% of the variance in trust (R² = 0.581). Performance expectancy values are greatly influenced by trust. The model explained a significant portion of the variance in each…
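For readers less familiar with structural model output, the reported R² is simply the share of the trust construct's variance reproduced by its structural predictors; in standard SEM notation (not spelled out in the snippet above):

$$
R^{2}_{\mathrm{Trust}} \;=\; 1 - \frac{\operatorname{Var}(\zeta_{\mathrm{Trust}})}{\operatorname{Var}(\mathrm{Trust})} \;=\; 0.581,
$$

where $\zeta_{\mathrm{Trust}}$ is the structural residual of trust, so roughly 58% of its variability is accounted for by the modeled antecedents.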

Discussion: bridging the gap between explainability and human cognition

The model illustrates that interacting with algorithms engages a series of intersecting cognitive processes, wherein features of algorithms are used to formulate a heuristic for user motivation and to trigger user action when using AI services. The findings of this study offer interesting insights into the links between causability and explainability and, further, into the dynamics of heuristics, quality, and trust in algorithms. The findings lay out an argument that human-centered AI…

Implications: how to overcome the black-box pitfall of AI

The implications of this study are twofold: managerial and theoretical. Practically, the findings of the study have design implications regarding what AI practitioners should do to support effective HAII, specifically, how to implement effective explainability in the AI interface. Theoretically, this study confirmed the heuristic-systematic process together with the liaison role of user trust in AI (Ferrario et al., 2020). It is implied that algorithms should be designed with principles that AIs are…
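One possible way to translate these design implications into an interface artifact is to attach explicit explanation cues to each recommended item, so the interface can show both the "why" and material that helps users judge the explanation. The sketch below is purely illustrative and not the authors' design; all field names are hypothetical.

```python
# Illustrative only: explanation cues carried alongside a recommended news item.
# "reasons" supports explainability; "data_used" and "confidence" give users
# material for judging the explanation (causability cues). Names are hypothetical.
from dataclasses import dataclass

@dataclass
class ExplanationCues:
    reasons: list[str]        # e.g., "similar to articles you read this week"
    data_used: list[str]      # which signals the recommender consulted
    confidence: float         # model confidence, shown to help calibrate trust
    feedback_prompt: str = "Tell us if this explanation doesn't make sense"

cues = ExplanationCues(
    reasons=["similar to articles you read this week"],
    data_used=["reading history", "topic preferences"],
    confidence=0.82,
)
print(cues)
```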

Conclusion and future studies: beyond explainable AI

AI will be developed to offer truly personalized, algorithm-supported news that is based on the user's past behavior and expressed interests (Shin, 2019). However, the AI industry should do this in a way that observes the FAT principles and respects users' right to explanations. This implies that AI and future algorithms must look beyond superficial fairness and legality, or perfunctory accuracy, and fulfill genuine user needs and requirements. Modeling the algorithm experience would be…

Declaration of Competing Interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome. We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us.

Acknowledgement

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF- 2017S1A5A2A02067973). Dr. Shin appreciates the generous support from the NSF Excellent Paper Support Program (2017-2018).

References (49)

  • M. Crain (2018). The limits of transparency: data brokers and commodification. New Media & Society.
  • H. Cramer et al. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Model User-Adapt Interact.
  • S. Chaiken (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J. Pers. Soc. Psychol.
  • S. Chaiken et al. A theory of heuristic and systematic information processing.
  • L. Chazette, K. Schneider (2020). Explainability as a non-functional requirement. Require…
  • S. Chen et al. (1999). Motivated heuristic and systematic processing. Psychol. Inq.
  • K. Combs et al. (2020). A preliminary look at heuristic analysis for assessing artificial intelligence explainability. WSEAS Trans. Comp. Res.
  • K.N. Dörr et al. (2017). Ethical challenges of algorithmic journalism. Digit. Journalism.
  • U. Ehsan et al. (2019). On design and evaluation of human-centered explainable AI systems. Glasgow'19.
  • A. Ferrario, M. Loi, E. Viganò (2020). In AI we trust incrementally. Philosophy & Technology. DOI:…
  • B. Goodman et al. (2017). European Union regulations on algorithmic decision-making and a right to explanation. AI Mag.
  • D. Gunning et al. (2019). XAI: explainable artificial intelligence. Sci. Rob.
  • J. Hair et al. (2013). A primer on partial least squares structural equation modeling.
  • A.F. Hayes (2013). Introduction to mediation, moderation, and conditional process analysis.
Dr. Shin has been a Professor at the College of Communication and Media Sciences at Zayed University, Abu Dhabi Campus, since 2019. Over the last 19 years, he has taught at various universities in the US and Korea, including Penn State University. Prior to Zayed University, he was a Professor at Sungkyunkwan University, Seoul, Korea. He was also founding Chair of the Department of Interaction Science, an interdisciplinary research initiative sponsored by the Ministry of Education and the Samsung Foundation. As Head and Director of the Interaction Science Research Center, he also served as a Principal Investigator of BK21 Plus, a national research project hosted by the Ministry of Education in Korea. Don received his bachelor's degree from Sungkyunkwan University (1997), his master's degree from Southern Illinois University (1998), and another master's degree and a PhD from Syracuse University (2004).
