1 Introduction
Recent developments in AI are expected to increase productivity and efficiency in domain experts’ work by transferring repetitive tasks from humans to algorithms, supporting experts’ decision-making processes, and contributing to increased accuracy and enhanced performance (Brynjolfsson and McAfee
2011,
2017; Jarrahi
2018). In this context, expert work refers to, e.g., non-routine tasks, applying cognitive skills and abstract knowledge held by an individual in an autonomous or authoritative position in a specific domain (Pakarinen and Huising
2023). Experts, compared to laypeople, are understood to demonstrate superior performance, as their cumulative experience makes decision-making less effortful and more intuitive (Hoffman
1998; Ericsson
2014). While novel AI-based applications are increasingly developed to support expert work, trust in technology has been considered a significant antecedent supporting the use and acceptance of novel technology (McKnight et al.
2011; de Visser et al.
2018; Siau and Wang
2018; Gefen et al.
2003; Pavlou and Gefen
2004). In Computer-Supported Cooperative Work (CSCW) and Human–Computer Interaction (HCI) research, trust in technology is typically defined as the trustor’s “attitude or willingness to accept vulnerability and uncertainty in a situation that aims to achieve a certain goal, performed by the trustee without an ability to monitor or control this party” (Lee and See
2004; Mayer et al.
1995; Vereschak et al.
2021).
Previous research on trust in technology has extensively explored trust in different contexts, such as online trust or e-trust (Corritore et al.
2001,
2003; Pennanen et al.
2007), trust in e-commerce (Pavlou
2003; McKnight et al.
2002), trust in recommendation agents (Benbasat and Wang
2005; Wang and Benbasat
2008; Komiak and Benbasat
2006) and trust in global virtual teams (Jarvenpaa et al.
1998). However, trust is dynamic and ever-changing, which motivates the need to reconsider trust in novel AI-based applications, including machine learning, especially in the context of expert work. We justify this with the following arguments. First, novel AI-based applications are learning systems that develop through interaction after being deployed into the target context. This might result in unpredictable outcomes and opaqueness when compared to the system’s initial and intended functionality, which can undermine trust. Second, machine-learning models might not be explainable or understandable to their users (or even developers). This has raised discussion on trustworthy AI, referring to responsible, reliable, and ethical AI advancement, which is expected to necessitate the involvement of people using AI or affected by it (Jacovi et al.
2021; Zicari et al.
2021; Ashoori and Weisz
2019). So far, such involvement remains scarce: recent research on AI design and development shows that the end-users’ perspective is mainly excluded from these processes (Hartikainen et al.
2022). In addition, domain expert users’ perspective on trust in technology (and AI) has remained undefined in the field of CSCW (Vereschak et al.
2021), especially when situated in a real-life context (Lockey et al.
2021). We approach the topic from a socio-technical perspective in which implementing a system in CSCW includes technical, organizational, and social aspects, such as employee needs and respect for local practices (Bullinger-Hoffmann et al.
2021; Mumford
2000). This motivates our third argument, which may be the most important in the context of CSCW: well-known challenges associated with technology design and development in a work-life context are related to the lack of mutual understanding between industry practitioners and domain experts. For instance, articulation of work aims to enhance awareness by creating visibility into the work practices and the task at hand (Suchman
1995; Schmidt and Bannon
1992). Understanding domain expert users’ perceptions of trust in intelligent technology in their work increases the probability of developing appropriate, usable, and acceptable AI applications for human-AI collaboration in expert work.
In this article, we explore accounting practitioners’ (henceforth,
accountants) trust in intelligent automation (IA) in their work. Accounting as an AI-application context offers an interesting area to explore the integration of AI-based technologies and modern knowledge work: accounting has been considered a particularly amenable domain for AI automation, with its repetitive and routine tasks and precisely defined data processing (Frey and Osborne
2017; Leitner-Hanetseder et al.
2021). Through semi-structured interviews (
N = 9), we collected data about accountants’ experiences and perceptions of trust toward a machine learning (ML) system designed to automate invoice processing. To explore domain experts’ willingness to trust IA, we ask:
What aspects contribute to the accounting practitioners’ trust in intelligent automation? In the target context, the accountants’ work was monitored by an intelligent system that collects data about their invoice processing and, when reaching a certain level of confidence, makes suggestions for invoice automation. At the time of the study, most participants were gradually moving towards IA, but only one of them had actualized the system’s full automation benefits. The aim of the system is full automation, in which invoices are processed without human intervention. As an outcome, certain parts of the accountants’ work are transferred to a machine-learning system, resulting in cost-efficiency and enhanced work practices. The studied system and its target context are elaborated in Section
3.
This study contributes to CSCW research by providing empirical insights into trust in IA in knowledge experts’ work, specifically within the context of accounting. Exploring the impact of novel AI-based applications on expert work, we can reassess domain experts’ trust in technology and how it should be approached given the inherent opaqueness and uncertainty of machine learning systems. By focusing on a specific user group and context, this study sheds light on the dynamic phenomenon of trust in IA, enhances our understanding of the collaborative partnership between humans and AI-embedded systems, and increases our understanding of appropriate, trustworthy, and acceptable AI design and development.
2.1 Trust in technology
Concerning research disciplines, our research builds on Computer-Supported Cooperative Work, Human–Computer Interaction, and Information Science. Previous research on trust in technology has adapted a trust definition from organization and management science, which describes trust as the “willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that party” (Mayer et al.
2004). Trust can also be perceived specifically from the perspective of automation, in which trust is defined as an “attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability” (Lee and See
2004). Recent CSCW research exploring human-AI trust uses both definitions (Vereschak et al.
2021). Both descriptions have similar connotations: trust concerns a trustor’s (entity that trusts) willingness for vulnerability, a certain attitude towards the trustee (entity that is trusted), and positive expectations regarding the likelihood of a favorable outcome. It is noteworthy that trust can be confused with related concepts, such as confidence (if vulnerability does not exist), reliance (the decision to follow someone’s recommendation), and compliance (the decision to ask for a recommendation) (Vereschak et al.
2021). Therefore, it is important to understand the general aspects of trust, although novel AI-based applications might complement this existing understanding.
Previous research acknowledges certain characteristics that influence trust. In interpersonal trust, such characteristics are ability (competence), benevolence (intents and motivations), and integrity (acceptance) (Mayer et al.
1995), whereas their technical correspondence would be functionality, reliability, and helpfulness (Lankton and McKnight
2011; Muir and Moray
1996).
Functionality is the degree to which the trustor anticipates the technology will have the functions or features needed to accomplish one’s tasks.
Reliability refers to an expectation that the technology will continually operate properly or will operate in a consistent, flawless manner, while helpfulness refers to the technology providing adequate and responsive help. Trust in automation, on the other hand, emphasizes
performance, process, and purpose as general bases of trust. These factors inform the trustor about, e.g., what the automation does (performance), how the automation operates (process), and why the automation was developed (purpose) (Lee and Moray
1992; Lee and See
2004; Madhavan and Wiegmann
2007). It is noteworthy that trust does not always translate into behavior: even though trust exists, the trustor might not be willing to act on it (Vereschak et al.
2021). In our study, we are interested in the domain expert users’ experiences and perceptions of trust and the aspects that influence their willingness to trust in IA. Previous research on trust in technology has not thoroughly explored domain experts’ perspectives, but we would expect, e.g., reliability, process, and purpose to play a role in (domain experts’) trust in AI, as these can be considered relatively essential in any technology use.
Despite the dynamic nature of trust, previous research on trust in automation offers interesting concepts that may also be well-suited to trust in novel AI-based applications. For instance,
appropriate trust might be useful when exploring trust in AI in expert work, referring to a situation in which the user’s trust and the technology’s capabilities are aligned (calibrated), and the user knows when to trust the technology, avoiding over-trusting and distrusting (Parasuraman and Riley
1997; Lee and See
2004). Recent research on trust in AI acknowledges that appropriate trust allows the users to apply their knowledge and improve the outcomes in situations where AI models may have limitations (Zhang et al.
2020a,
b). A similar concept is
situational trust, which emphasizes the context-dependent nature of trust, such as workload and risk regarding the trust decision, dividing trust antecedents into three different groups: the trustor, the trustee, and the context/environment. Situational trust underlines the holistic perspective on trust in technology (Hoff and Bashir
2015). This is especially important when considering the uncertainty of AI-based applications: machine-learning systems might produce different outcomes for different users, underlining the need for end-user control over these technologies. Both appropriate trust and situational trust are closely related to the CSCW concept of appropriation/tailorability, which refers to individuals adapting technology to their situation—technology is expected to support local practices (e.g., Orlikowski
1992,
1995; Dourish
2003). The gap between social requirements and technical feasibility is often argued to be the reason why IT systems do not support real work efficiently: a representation of work, if created in the interest of introducing new information technologies, is unlikely to include aspects of the work considered beyond the reach of those technologies (Ackerman
2000; Suchman
1995). Such awareness is not possible to achieve without understanding the contextual and situational variance (Dourish and Bellotti
1992).
2.2 Trust in AI
Current research on trust in AI can be approached from the perspectives of emotional trust or cognitive trust, in which the former focuses on emotions/affects and the latter on calculated and rational trust decisions (Schoorman et al.
2007; Glikson and Woolley
2020). Cognitive trust is emphasized in initial trust (Gefen et al.
2003), and it may be based on second-hand knowledge, for instance, reputation, and guides the trustor’s evaluation of the trustee’s trustworthiness (e.g., McKnight et al.
1998). In this study, we focus on cognitive trust due to the context in which technology is embedded (expert work). Exploring cognitive trust in this context is interesting because domain experts are typically in a position in which they are expected to make rational decisions. In addition, complex cognitive tasks are also subject to change due to the recent development of AI (Saßmannshausen et al.
2021).
When researchers examine cognitive trust in AI, they measure it as a function of whether users are willing to take information or advice and act on it, as well as whether they see the technology as helpful, competent, or useful (Glikson and Woolley
2020). Usefulness, especially, can be considered important in the context of expert work because tasks are typically conducted in fast-paced environments that emphasize the relevance of information and tools. However, it is interesting to what degree (and when) usefulness can be evaluated, because the benefits of AI applications might become visible only with a delay. Cognitive trust also reflects the trustworthiness of human-AI interaction and collaboration, referring to the degree to which AI helps users make rational decisions and to the reliability of this collaboration. Trustworthy AI is a concept that has emerged in the ethical frameworks and guidelines around AI, promoting the idea that individuals, organizations, and societies will only ever be able to achieve the full potential of AI if trust can be established in its development, deployment, and use (HLEG AI
2019).
Glikson and Woolley (
2020) conducted a review of users’ cognitive and emotional trust in AI. According to this study, cognitive trust is based on perceptions of the trustee’s reliability and competence, as well as on whether users see the technology as helpful, competent, or useful. The results of the review reveal the important role of AI’s transparency, reliability, task characteristics, and immediacy behaviors in developing cognitive trust. Transparency is the degree to which the underlying operating rules and inner logic of the technology are apparent to the users. Reliability means exhibiting the same, expected behavior over time. In the case of AI, reliability is often difficult to assess, especially in the context of high machine intelligence, as learning from data can lead the technology to exhibit different behavior even if the underlying objective function remains the same. Task characteristics refer to the work the technology is performing, such as whether it deals with largely technical or interpersonal judgments. Immediacy behaviors refer to personalization, interactive or socially oriented abilities, or gestures intended to increase interpersonal closeness, such as proactivity and responsiveness. Glikson and Woolley (
2020) underline a growing need for research in real-life settings, such as organizations that already use AI in their management or decision-making systems, and call for more field studies on the topic.
Van der Werff et al. (
2021) consider trust in AI to include micro- and macro-level trust cues that provide information influencing trust decisions. Micro-level cues arise from information about the trustee or the trustor, emerging from interpersonal interaction and evaluation, for instance, through experiences of interacting with the technology. Macro-level cues occur in the wider contextual environment in which the AI is embedded, encompassing information that arises from, e.g., service providers, complementary technologies, and regulatory standards. This refers to a situation in which information about organizations, systems, or technology can shape trusting attitudes, decisions, and behavior regarding another trust referent (Bachmann
2001; Kosonen et al.
2008; Shapiro
1987). According to this study, both macro- and micro-level cues affect users’ trust in AI. At the macro level, the influencing trust cues include, for instance, organizational ability to build AI services, benevolence in creating services for customer needs, and integrity regarding data privacy. Macro- or organizational-level integrity was also signaled through open communication, privacy protection, and fair information practices. At the micro level, the trust cues arise from interaction with the technology, emphasizing the importance of ability, integrity, and propensity to trust. They also found that context-relevant knowledge and information asymmetry might play an important role in the development of trust propensity. Van der Werff et al. (
2021) argue that AI cannot operate within a vacuum and should be studied in the context of trust, considering the trust-related cues that originate at more macro levels.
Vereschak et al. (
2021) conducted a review of the methods used to empirically investigate trust in AI-assisted decision-making. Their findings provide a practical perspective on studying human-AI trust in the fields of CSCW and HCI from both quantitative and qualitative approaches. They explore, for instance, elements related to the participants’ experience and expertise. The findings show that previous research does not typically involve participants with prior experience with either the AI-embedded system or the task associated with it, although this is recommended. In addition, they discuss how the task is framed and integrated into the experiment. For instance, immediate feedback allows participants to dynamically update their level of trust; in most cases, however, the feedback is related to the participants’ performance instead of system performance. They also underline a need to control participants’ expectations, focusing on positive aspects. This could be done, for example, by providing instructions, an error-free initial experience, and a guaranteed minimum level of accuracy in system behavior (min. 60–70% accuracy depending on the context). They also found that most of the research included in the review evaluated trust as one of multiple factors of users’ experience instead of solely focusing on human-AI trust. It is noteworthy that most of their observations are based on studies conducted in laboratory settings, underlining a need for human-AI trust research in real-life settings.
2.3 Domain expert users’ trust in AI
While previous research provides interesting insight into trust in AI, there is a lack of studies that explore domain experts’ perspectives on trust in real-life settings and during technology use. Only a few studies have attempted to study domain experts’ trust in AI with a qualitative approach. For instance, Bedué and Fritzsche (
2022) conducted an interview study (
N = 12) with company decision-makers (from different domains) to explore their trust in AI, underlining the importance of trust in technology adoption. Focusing on the early stages of technology development, they found access to knowledge, transparency, explainability, certifications, and self-imposed standards and guidelines to be the main determinants of trust in AI. In addition to supporting trust, these elements were found to reduce uncertainties and enhance AI adoption intentions. Access to knowledge refers to the new skills and more technical skill sets required when using AI. They argue that this knowledge is currently held by information technology providers, and that, e.g., institutions lack knowledge on AI evaluation and development. Transparency and explainability cover both the internal functioning of a model and its interpretability, e.g., verbal explanations or visualizations. Certifications are considered a signal of the company’s integrity. For instance, self-imposed standards are perceived as crucial to fair and reasonable AI regulation. Bedué and Fritzsche (
2022) emphasize that trust in AI needs to be understood in a wider context, underlining the collaboration of multiple institutes and alignment of AI with social norms and compliance with policies and standards.
Saßmannshausen et al. (
2021) explored the antecedent variables of trust in AI within production management from three facets: AI characteristics, trustors’ (domain experts’) characteristics, and decision situation characteristics, underlining trust as essential to successful human-AI cooperation. A qualitative study with four expert interviews was conducted for hypothesis development. According to their findings, users’ digital affinity, i.e., interest and pleasure in learning and using digital technologies, was considered an important antecedent that increases experts’ trust in AI. Regarding the AI characteristics, perceived ability (e.g., the AI’s ability or competence on a task) and perceived comprehensibility (e.g., the quality and plausibility of AI explanations) were found to positively affect trust. Considering the decision situation characteristics, predictability and error costs were found to indirectly increase trust in AI through perceived ability and/or perceived comprehensibility. Error costs refer to the potential impact, consequences, and costs caused by wrong AI decisions, affecting learned trust and helping users to understand the situation and AI capabilities. Based on their findings, they provide design guidelines for socially sustainable human-AI interaction in production management, such as designing explainable AI. For trust calibration, they suggest verifying the AI decision by another system or a human being. They also recommend introducing AI to digitally competent employees and even state that this should become a criterion for recruiting. Expert status as a trustor characteristic was not found to contribute to trust in their research.
Lockey et al. (
2021) conducted a review of the antecedents of trust in AI, exploring the vulnerabilities of key stakeholder groups in relation to AI systems, including domain experts, end-users, and societal perspectives. They argue that societal adoption of AI depends on stakeholders’ trust in AI, and that understanding the trustor (e.g., the domain expert) is particularly important in the context of AI, as it influences the nature of the risks and vulnerabilities inherent in trusting an AI system. According to the findings, the key vulnerabilities faced by domain experts relate to professional knowledge, skills, identity, and reputation, for instance, loss of expert oversight or professional over-reliance. Transparency and explainability are considered a trust challenge, emphasizing the experts’ ability to understand, explain, and justify AI decisions to other stakeholders. Domain experts need to remain accountable, e.g., for the accuracy and fairness of AI output and for privacy, and thus they are expected to be able to provide human oversight in the use of AI. The review also introduces reputational and legal risks, such as biased results or inappropriate data usage, that influence domain experts’ trust in AI. They call for further research to understand what influences stakeholders’ well-calibrated trust in AI systems, because high trust might not always be appropriate. For instance, AI explanations might cause users to misplace trust in inaccurate AI outcomes, underlining a need for correct evaluation of a system’s trustworthiness.
Taken together, trust can be perceived as important in users’ AI adoption, human-AI cooperation, and societal AI adoption. Understanding the technology and being able to evaluate its trustworthiness, such as its reliability and predictability (e.g., through feedback), is considered important for trust. When focusing on domain experts as technology users, trusting AI might require changes in their skillsets, e.g., regarding users’ technical skills. In addition, contextual aspects, such as a given task or situation, can shape users’ perceptions regarding trust. Trust in AI can also be influenced by certifications or standards, or by acknowledging potential negative consequences, e.g., reputational risks in error situations. Overall, it seems that trust in AI is holistic and sociotechnical by nature, expanding beyond human-AI interaction: although trust can be supported with technical means (e.g., explanations), the social context also contributes to trust.
3 Context: studied professional and technological domain
The professionals in this study are accountants, whose main tasks include, e.g., their clients’ financial administration, financial analysis and planning, tax preparations, and invoice processing. The accountants in our study work in monthly cycles, providing an overview of the financial activities at the end of each month. The accounting domain was selected for this study because it represents knowledge-intensive work expected to be largely affected by novel AI technologies (Frey and Osborne
2017; Leitner-Hanetseder et al.
2021). In addition, accounting professionals themselves acknowledge that their profession is undergoing significant disruption, and to remain in their profession, they feel the need to adapt to the changes, new roles, and tasks emerging from human-AI collaboration (Asatiani et al.
2020; Leitner-Hanetseder et al.
2021).
The machine learning (ML) system we focused on in this study was developed by an external vendor to process invoices automatically. The aim is to automate mechanical and routine tasks in accounting, such as value-added tax (VAT) processing, cost allocations, and accrued expenses. The system is tailored to support existing organizational processes and practices, and automated invoice processing can be integrated into an organization’s existing system or deployed in a separate interface developed around the system. The automation proceeds in three steps, starting with (1) automated learning: the system tracks the accountant’s work and learns on a customer-specific basis how invoice processing is done, automatically creating client/company-specific rules. This is followed by (2) quality assurance: the system starts processing invoices, and the accountant monitors and evaluates the system outcomes. If the accountant corrects or makes changes to the invoices, the company-specific rules are modified based on these changes. The third step is (3) intelligent automation: invoices that do not need any input from the accountant are processed fully automatically, without human intervention, if the accountant permits this. Table
1 presents the automation level reached by each study participant.
Table 1
Overview of the study participants and their relations with the studied system. We label the participants according to their system use in three categories: (a) indicates system use of less than six months, (b) indicates system use of less than two years, and (c) indicates system use of more than two years
P1a | 4 months | 4 | Level 1 (automated learning) |
P2b | 1 year | 10 | Level 2 (quality assurance) |
P3c | 2 years | 6 | Level 2 (quality assurance) |
P4a | 6 months | 1 | Level 1 (automated learning) |
P5b | 1 year | 4 | Level 2 (quality assurance) |
P6c | 3 years | 10 | Level 2 (quality assurance) |
P7b | 1 year | 14 | Level 3 (intelligent automation) |
P8c | 2 years | 20 | Level 2 (quality assurance) |
P9c | 2 years | 1 | Level 3 (intelligent automation) |
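To make the three-step process described above more concrete, the following Python sketch illustrates one plausible way a confidence-based, client-specific rule learner of this kind could be structured. It is a minimal illustration under our own assumptions: the names (Invoice, observe, suggest), the rule keyed on the supplier, and the example thresholds are hypothetical and do not describe the vendor’s actual implementation.

```python
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class Invoice:
    client_id: str
    supplier: str      # used here as the key for rule learning (an assumption)
    amount: float

class InvoiceAutomation:
    def __init__(self, confidence_threshold: float = 0.9, min_observations: int = 10):
        # (client, supplier) -> Counter of postings chosen by the accountant
        self.history = defaultdict(Counter)
        self.confidence_threshold = confidence_threshold
        self.min_observations = min_observations
        self.full_automation_allowed = set()   # clients the accountant has approved

    # Step 1: automated learning -- observe how the accountant posts each invoice.
    def observe(self, invoice: Invoice, posting: str) -> None:
        self.history[(invoice.client_id, invoice.supplier)][posting] += 1

    def _rule(self, invoice: Invoice):
        counts = self.history[(invoice.client_id, invoice.supplier)]
        total = sum(counts.values())
        if total < self.min_observations:
            return None, 0.0
        posting, n = counts.most_common(1)[0]
        return posting, n / total

    # Step 2: quality assurance -- suggest a posting; the accountant confirms or corrects.
    def suggest(self, invoice: Invoice):
        posting, confidence = self._rule(invoice)
        if posting is not None and confidence >= self.confidence_threshold:
            return posting
        return None   # fall back to manual processing

    def correct(self, invoice: Invoice, posting: str) -> None:
        # Corrections feed back into the client-specific rules.
        self.observe(invoice, posting)

    # Step 3: intelligent automation -- process without human intervention
    # only for clients the accountant has explicitly permitted.
    def process(self, invoice: Invoice):
        suggestion = self.suggest(invoice)
        if suggestion and invoice.client_id in self.full_automation_allowed:
            return suggestion          # posted automatically
        return None                    # routed to the accountant
```

In this sketch, full automation is gated both by a per-rule confidence threshold and by the accountant’s explicit permission per client, mirroring the requirement that invoices are processed without human intervention only if the accountant permits this.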
The system deployment process is iterative. In most cases, the ML system is introduced to the accountants after the decision on the deployment has already been agreed upon with client company managers. There are multiple meetings between system developers and accountants: in the first meeting, the developers introduce the system and its practical functionalities, for instance, how IA works and what kinds of clients are (or are not) suitable for automation with IA. A second meeting is organized after the accountants have used the system for ca. two months. This meeting focuses on fine adjustments, in which the developers aim to encourage accountants to use IA and help them overcome possible challenges or issues during use. This is followed by regular monitoring of the use and a push for increasing automation. It is noteworthy that the deployment process emphasizes participatory/collaborative design practices in which the end-users are actively included in the design and development of the product (e.g., Auernhammer
2020).
6 Discussion
This study investigated accountants’ trust in IA in their work, focusing on invoice processing. The study findings highlight the situational and context-dependent nature of trust in IA. For instance, a collaborative design process and open communication with the developers are considered to influence trust in IA. Most interestingly, the findings acknowledge domain experts’ specific trustor characteristics as AI end-users: the studied IA system develops and personalizes through system use, and the accountants play a vital role in training the system by transferring their expert knowledge to it, enabling the system to identify repetitive patterns and suggest automation possibilities. By leveraging their expert knowledge, accountants evaluate the system outcomes, influencing the iterative development and customization of IA. Next, we discuss the study findings considering prior research, starting from the social aspects and continuing to human-AI interaction. Lastly, we cover the specific characteristics of domain experts as technology users.
6.1 Interpersonal and social trust shaping trust in intelligent automation
The concept of appropriate trust underlines the situational and context-dependent nature of trust, further emphasizing that trust in IA is not solely focused on technology and its characteristics but on the whole sociotechnical system. This aligns with CSCW research, in which the social and the technical elements should be designed in parallel because they influence each other (Bullinger-Hoffmann et al.
2021). Our study findings underline the importance of a collaborative and iterative design process in relation to trust: domain experts are actively involved in the design and development of the product to ensure that the target activities and social context are properly understood by the system developers. They also feel respected in what they do and feel they have a voice in the development and deployment process. Such articulation of the work is necessary to enhance understanding and awareness between industry practitioners and domain experts because it creates visibility into the work practices and the task at hand (Suchman
1995; Schmidt and Bannon
1992). This resulted in a more usable and useful end-product and a receptive and positive attitude towards the system – thus supporting trust in the system’s IA features.
Attitude is considered an important aspect of trust in automation (Lee and See
2004), which is supported by the present study. Although it is an individual trait, attitude can be shaped socially: information occurring in a wider social environment, e.g., in indirect social relationships, can influence trusting attitudes and behavior. This can include, e.g., interaction with service providers and the use of complementary technologies (Bachmann
2001; Kosonen et al.
2008; Shapiro
1987; van der Werff et al.
2021). In addition, the study participants expressed positive sentiments towards the developers and their willingness to listen to the users and their suggestions, indicating the developers’ benevolence and ability to create helpful products. This underlines the importance of social trust and interpersonal trust (Mayer et al.
1995) and can also be seen in social practices inside an organization, with more active and receptive users encouraging their colleagues to use and trial the system. This relatively strong social trust between the actors offered a fruitful starting point for the experts to trust even a new type of technology with a high level of automation.
6.2 Aspects of trust in human-AI interaction
Research on trust in technology underlines functionality, reliability, and helpfulness as its main factors (Lankton and McKnight
2011; Muir and Moray
1996). These characteristics are considered to enable users to trust the system, which is understandable—without them, the technology would be useless. This can be considered obvious regarding novel technologies, including AI-based applications. For instance, Bingley et al. (
2023) explored user beliefs about AI, and functionality was the most mentioned theme for laypeople. Functionality and reliability were also present in our study findings and are clearly essential to accountants’ trust in IA.
What is interesting about our study findings is the emphasis on domain experts’ personal experience with reliability. The accountants underlined a need to evaluate, monitor, and double-check the system operations to have first-hand experience of its reliability. This helps the users assess the perceived ability of the system to conduct a given task, which is considered to influence trust in AI (Saßmannshausen et al.
2021). Such examination can also serve the users’ need for AI transparency and explainability (Glikson and Woolley
2020; Bedué and Fritzsche
2022; Lockey et al.
2021), helping them to understand the situations in which ML systems can be considered trustworthy. Our study findings underline the relevance of assessing the appropriate use of IA for a given situation. This is especially important regarding the autonomous and adaptive nature of novel AI-based applications: while rule-based systems can be expected to operate in a predictable manner, this might not apply to continuously updating machine-learning systems.
According to Hoff and Bashir (
2015), trust in automation is situational and context-dependent, and the decision to trust manifests according to the situation or task at hand. In our study, this situatedness is constituted in the varied clientele and the accountants’ knowledge of their clients’ operational environments. It is noteworthy that trust in IA comes with hesitation: trust in the system should be calibrated appropriately (Parasuraman and Riley
1997; Lee and See
2004), and the study participants do not seem comfortable with relying on the system too much. Appropriate trust refers to a situation in which the user’s trust and the technology’s capabilities are aligned, and the user knows when to trust the technology, avoiding over-trusting and distrusting. To gain appropriate trust in AI, users should be able to utilize their knowledge to improve the outcomes in situations where AI models may have limitations (Zhang et al.
2020a,
b). This was also visible in our findings: expert knowledge guided the trust calibration, helping the users to decide when (or when not) to trust the system’s suggestions.
6.3 Trustor characteristics: domain experts as AI users
In addition to technological characteristics in trust, our findings reveal interesting trustor (user) characteristics. For instance, certain individual traits, such as users’ propensity or tendency to trust (e.g., Mayer et al.
1995; van der Werff et al.
2021; Saßmannshausen et al.
2021), influence trust in technology, including AI. Our findings support the idea that trust in novel technology is a cumulative process built on previous experiences. However, while
digital affinity (i.e., interest and pleasure in learning and using digital technologies) has been found to influence domain experts’ trust in AI (Saßmannshausen et al.
2021), this is probably a common characteristic for all techno-savvy users who are more prone to use (and trust) novel technologies than those with a more hesitant attitude. To extend these prior findings, our study reveals a few additional aspects of trust in IA.
First, the importance of expert knowledge is emphasized in our study. The use of IA intertwines with general accounting knowledge and an understanding of the clients’ specific needs, intricacies, and variations. To be able to appropriately monitor the system and permit higher levels of automation, the accountants need to have first-hand knowledge of their clients' operations. This is very case-specific and refers to tailored use of novel AI applications: within the same organization, the use of IA might vary a lot. This also explains the need for personal reassurance regarding the system functionality and becomes visible in the participants’ comparison between IA and a colleague: whereas a colleague might have more knowledge of accounting, they lack knowledge of a specific clientele. This is an interesting observation, showing that the notion of expertise is related not only to explicit knowledge but also to tacit knowledge (Rinta-Kahila et al.
2018). Whereas previous research suggests that expert status (e.g., competence, skills, and experience) is not significant in trust in AI (Saßmannshausen et al.
2021), our study clearly underlines its relevance: the use of IA is based on domain expertise. This can probably be explained by differences in the technical characteristics of these studies: we chose to study trust in relation to a specific AI-based technology in a specific professional context instead of people’s “general” perceptions of AI. Second, domain experts' role in their organization and in relation to their clients influences their trust in technology. Because the use of IA is client-specific, the accountants are accountable for the system use and its possible consequences. This observation aligns with previous research, in which domain experts need to have the ability to understand, explain, and justify AI decisions to other stakeholders (Lockey et al.
2021). Work systems are considered to consist of people, technology, organizations, and the primary tasks (e.g., Bullinger-Hoffmann et al.
2021), underlining a holistic perspective when developing AI systems for knowledge work. Also, perceptions of AI systems’ trustworthiness cover work systems that comprise both AI and humans and their interaction and collaboration in certain contexts and tasks. Therefore, the question of trust in AI might not focus on
whether AI is trustworthy but on
how AI could be used in a trustworthy manner, which refers to both technical and trustor characteristics and the design and development of such systems.
6.4 Limitations and future research
While this study provides insights into domain experts’ trust in intelligent automation, it is important to acknowledge certain limitations. The sample size used in this research is relatively small, which means that additional aspects of trust, as well as subjective aspects, may be found in further studies. However, this sample size is typical for qualitative interview studies. Also, participation in the research interviews was voluntary; hence, the interviewees represent people who have an initial interest in technology and therefore possibly a propensity to trust novel technology solutions. Lastly, the study did not delve into the long-term effects or potential consequences of trust or mistrust in these systems. By addressing these limitations in future research, we can further advance our understanding of trust in intelligent automation and contribute to the development of trustworthy AI systems in various domains.
It is important to recognize that focusing on a particular domain, such as accountancy, presents unique challenges regarding the generalizability and transferability of findings to other domains. For example, our research indicates that accuracy and double-checking hold significant importance in the work of accountants, values they are reluctant to compromise with the introduction of novel AI technologies. However, such preferences may not be applicable across all knowledge-intensive fields in which AI is embedded. The main focus should be on understanding the domain-specific nuances and characteristics: a genuine interest in expert work in certain professional contexts and the end-users’ preferences regarding their work practices (i.e., what they find valuable and what tasks they are willing to transfer to intelligent systems). Whereas we cannot expect the system developers to tailor each system in accordance with domain-specific nuances, it is essential to respect these nuances, including their respective goals, specific aims, and domain-specific values, when designing AI systems. Domain-specific know-how will always be part of expertise. It is noteworthy that these characteristics might also be invisible (Suchman
1995) and that possible organizational value or goal conflicts should be acknowledged during the design and development phase. This is also important considering the increasing agency of AI end-users in, e.g., system training: as domain experts are expected to transfer their expertise into AI systems, system developers might become more dependent on them. Thus, approaching this specific user group with respect is beneficial for system use and acceptance, as our findings suggest. Furthermore, the concept of trust and its formation between domain experts and their clients may vary fundamentally across different professions. For instance, doctors may inherently enjoy a position of trust built upon their extensive education, whereas professionals in other domains, such as real estate, may need to actively cultivate trust among their clients (Ala-Luopa et al. forthcoming). Novel AI applications can be designed either to support or to diminish expertise, and these decisions (and embedded values) have both societal and domain-specific consequences that designers and other stakeholders should be aware of.
In addition to awareness of and respect for domain-specific nuances and design guidelines for AI in expert work, this study also offers other intriguing avenues for future research. We call for more research on domain experts’ trustor characteristics, for instance, on those with rejective attitudes towards novel technologies. In addition, the perspectives of other stakeholders regarding domain experts’ use of novel AI applications also require further exploration. This can refer to indirect trust relations, for instance, clients’ attitudes towards AI-supported work practices that concern their businesses. In general, more research is needed to explore domain experts’ interactions with specific AI applications, underlining the importance of research situated in real-life contexts, which are oftentimes more nuanced and complicated than we might anticipate.
7 Conclusion
The study underlines the context-specific, situated nature of trust in intelligent automation (IA) and reveals the importance of personal oversight and social aspects, such as collaborative design practices. The system use is based on expert knowledge, including tacit knowledge, which makes both the system use and trust in IA highly individual. Trust in IA in our study manifests at different levels: in human-AI interaction, in interpersonal communication and collaboration with the technology provider, and in the domain experts’ specific trustor characteristics. The findings expand previous research on trust in technology by revealing new observations on trust in knowledge work, underlining variance in the sociotechnical phenomenon of trust.