
Open Access | Published: 24 August 2021

The impact of lay beliefs about AI on adoption of algorithmic advice

Authors: Benjamin von Walter, Dietmar Kremmel, Bruno Jäger

Published in: Marketing Letters | Issue 1/2022


Abstract

There is little research on how consumers decide whether or not to use algorithmic advice. In this research, we show that consumers’ lay beliefs about artificial intelligence (AI) serve as a heuristic cue to evaluate the accuracy of algorithmic advice in different professional service domains. Three studies provide robust evidence that consumers who believe that AI is higher than human intelligence are more likely to adopt algorithmic advice. We also demonstrate that lay beliefs about AI only influence adoption of algorithmic advice when a decision task is perceived to be complex.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s11002-021-09589-1.

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Algorithmic advice refers to the automation of professional advice giving by expert systems that interact with consumers instead of highly trained specialists (Logg et al., 2019; Sampson, 2021). It is most widespread in financial services, where “robo-advisors” develop investment strategies without human intervention. However, other professional service industries are also automating advice. Services such as DoNotPay or Leaperr, for example, offer “AI-powered” legal and interior design advice without the consumer ever seeing a lawyer or interior designer.
Previous research on algorithmic advice has led to contradictory results. Several studies suggest that consumers oppose algorithmic advice, a phenomenon described as algorithm aversion. Dietvorst et al. (2015), for example, found that individuals were less likely to choose algorithmic advice over inferior human advice to predict student performance after seeing an algorithm err. In the medical domain, scholars have shown that patients do not trust algorithmic advice (Promberger & Baron, 2006), arguing that patients are afraid it neglects human uniqueness (Longoni et al., 2019). In a similar vein, Castelo et al. (2019) found that algorithm aversion was higher for intuitive, subjective tasks than for quantifiable, objective tasks. However, a study by Logg et al. (2019) has called algorithm aversion into question. Focusing on domains such as business forecasts or the prediction of romantic attraction, they found that people generally appreciated advice from an algorithm over human advice. Hildebrand and Bergner (2021) showed that people appreciated algorithmic financial advice more strongly if it used a human-like, conversational style. In sum, these contradictory results point to the existence of additional factors which may affect adoption of algorithmic advice.
One such factor may be consumers’ lay beliefs about AI. Although lay beliefs about AI seem highly salient in the marketplace as people more frequently use AI-enabled services (Huang & Rust, 2018), research about such beliefs is scarce. That is, previous studies have provided participants with specific information about the quality of algorithmic advice such as information about its mistakes (e.g., Dietvorst et al., 2015; Longoni et al., 2019). However, in real life, people usually do not receive this kind of information and lack domain expertise to evaluate the accuracy of algorithmic advice. Hence, they may rely on more general cues such as their lay beliefs about AI when deciding whether to use algorithmic advice or not.
In our research, we want to address this gap and argue that consumers have different beliefs about how intelligent AI is in comparison to human intelligence. Specifically, we propose that lay beliefs about AI affect adoption of algorithmic advice because they serve as cues to infer accuracy of advice, especially when perceived complexity of a task is high. In three studies, we provide converging evidence for this prediction. In doing so, we contribute to research on algorithmic advice and point to the importance of considering consumers’ lay beliefs about AI when automating advice services.

2 Conceptual development

Figure 1 shows our conceptual model. At the core of this model is the impact of lay beliefs about AI on adoption of algorithmic advice via expected accuracy of advice. We also propose a moderation effect of perceived complexity.

2.1 Lay beliefs about AI

According to the literature, individuals hold implicit theories about intelligence which differ from explicit theories of intelligence and affect their expectations of themselves and others (Furnham, 2001; Sternberg, 1985). Whereas explicit theories are tested by empirical studies and claim general validity, implicit theories are naïve constructions of lay people and can be viewed as templates of prototypical characteristics of an intelligent person (Sternberg, 1985). Specifically, research has found that people regard someone as intelligent if she or he possesses analytical abilities such as solving number problems, processing new information, and reasoning logically, whereas other types of skills (e.g., socio-emotional skills) are less strongly associated with an intelligent person (Furnham, 2001). In addition, extensive research has demonstrated that individuals have stereotypic beliefs about the overall level of intelligence of different groups compared to other groups (Petrides et al., 2004; Truxillo et al., 2012).
One may assume that individuals hold similar beliefs about AI concerning different types of skills (e.g., analytical, socio-emotional skills) and the overall intelligence of AI. Arguably, such beliefs may depend on the task AI engages in and on the way AI is implemented (Castelo et al., 2019; Logg et al., 2019). In the following, we focus on lay beliefs about AI in the field of expert systems that advise consumers on making an optimal choice (e.g., recommending an investment portfolio or an optimal combination of products).
As a first step to examine lay beliefs about AI in this context, we conducted a pre-study and asked bank customers to discuss AI.1 The main characteristics associated with AI were applying algorithmic rules, processing data, and making calculations (“Artificial intelligence is about algorithms which follow a certain strategy based on historic data. I don’t imagine an artificial human being”, customer, male, 35). However, participants had different views about the overall level of AI compared to human intelligence. That is, while some participants believed AI to be higher than human intelligence due to greater processing power and memory (“I always compare robo-advice with medical diagnoses. With AI you can make much better medical diagnoses. Do you know Watson? A machine such as Watson can store all human illnesses and can precisely analyze them for possible diagnoses”, customer, male, 33), others argued that AI could never be as high as human intelligence because all AI was man-made (“I believe that AI can only be as smart as the intelligence of the people who programmed it”, customer, female, 42). More illustrative quotes can be found in Web Appendix A.
Against this background, we propose that lay beliefs about the level of AI compared to human intelligence may influence adoption of algorithmic advice because they serve as cues for accuracy of advice. Studies on advice suggest that maximizing decision accuracy is a main motive of decision makers (Schrah et al., 2006). That is, individuals may seek and use advice because they want to make as accurate decisions as possible. However, because it is usually difficult for individuals to evaluate accuracy in advance, they may rely on heuristic cues to infer accuracy (Bonaccio & Dalal, 2006). In the context of algorithmic advice, lay beliefs about AI may represent an important cue. Specifically, consumers who believe that AI is higher than human intelligence may expect to receive more accurate advice and may therefore expect to make better decisions when using algorithmic advice. Hence, they may be more motivated to use algorithmic advice. In contrast, consumers who believe that AI is inferior to human intelligence may not expect to receive more accurate advice. Consequently, they may be less likely to adopt algorithmic advice. Thus,
  • H1: Consumers believing that AI is higher than human intelligence will be more likely to adopt algorithmic advice than consumers who do not believe that AI is higher than human intelligence.

2.2 Perceived complexity

Scholars have emphasized that consumers tend to perceive professional services as complex (Mikolon et al., 2015). That is, consumers may regard services such as financial, architectural, or legal advice as complex because they deal with decision problems that involve multiple different, interdependent, and sometimes ambiguous aspects (Campbell, 1988; Dellaert et al., 2012). However, as complexity occurs on the task level rather than on the job level (Campbell, 1988), one may also argue that perceived complexity varies depending on the task. For example, consumers may regard developing an investment strategy as a more complex task than finding the right credit card, as it requires considering a greater number of product alternatives. Importantly, research suggests that individuals respond more favorably to (human) advice when they perceive a high degree of complexity (Gino & Moore, 2007; Schrah et al., 2006). To explain this effect, it has been argued that individuals facing a complex task want to reduce their cognitive effort but still obtain a high level of decision accuracy (Schrah et al., 2006).
Based on these findings, one may assume that the effect of lay beliefs on adoption of algorithmic advice is moderated by perceived complexity. When consumers perceive a high level of task complexity, lay beliefs about AI may strongly affect adoption of algorithmic advice. That is, individuals may feel that a high level of intelligence is needed when addressing a decision problem which requires analyzing a multitude of different, interdependent aspects. Consequently, consumers who believe that AI is higher than human intelligence may feel that they receive more accurate advice. In contrast, when consumers perceive a low level of complexity, lay beliefs about AI may not exert a similar impact. In this case, consumers may assume that a task can be solved by using simple rules. Hence, they may not consider intelligence a crucial factor to increase accuracy of advice. That is, they may feel that advice is equally accurate regardless of whether AI is lower or higher than human intelligence. As a result, they may not be more motivated to seek algorithmic advice when they believe that AI is higher than human intelligence. Hence,
  • H2: Consumers believing that AI is higher than human intelligence will only be more likely to adopt algorithmic advice when perceived task complexity is high.

3 Study 1

3.1 Methodology

In study 1, we cooperated with a Swiss retail bank to examine the impact of lay beliefs about AI. The bank had introduced a new investment advice service (“robo-advisor”) and allowed us to include a short questionnaire in an e-mailing sent to 3656 customers, which stated that the bank wanted to obtain its customers’ opinions about the bank. The mailing did not make any reference to AI or investment advice. Participants were directed to a website where they were exposed to our survey questions. Next, they were asked to watch a short video about the new service. After the video, participants could register for the new service. In total, 454 customers answered the survey questions and watched the video (72.7% male, average age: 53.1 years).
We measured lay beliefs about AI with two items (artificial intelligence is superior to human intelligence in many areas/artificial intelligence is better able to solve complex problems than human intelligence; r = 0.60). To control for extraneous influence, we included a three-item measure of satisfaction with the bank (α = 0.91) from Mende and Bolton (2011). Moreover, we included several items from previous research on technology adoption as control measures such as need for interaction, convenience, and technical anxiety (e.g., Collier & Kimes, 2013; Meuter et al., 2005). We also asked individuals for their total assets (ordinal scale with five categories) and assessed if they had a personal client advisor or not. Unless stated otherwise, all items of this study and of the other studies used 7-point scales labeled strongly disagree (1) and strongly agree (7). All items can be found in Web Appendix D.
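For illustration, the two reliability statistics reported above (coefficient alpha for multi-item scales and the inter-item correlation for two-item measures) can be computed as in the following minimal sketch; the data frame, item names, and responses are hypothetical and not part of our materials.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a three-item satisfaction scale (7-point items)
satisfaction = pd.DataFrame({
    "sat_1": [7, 6, 5, 6, 7, 4],
    "sat_2": [6, 6, 5, 7, 7, 4],
    "sat_3": [7, 5, 6, 6, 7, 5],
})
print(round(cronbach_alpha(satisfaction), 2))

# For a two-item measure (e.g., lay beliefs about AI), reliability is the inter-item correlation r
lay_beliefs = pd.DataFrame({"ai_1": [5, 6, 3, 7, 4, 2], "ai_2": [4, 6, 2, 7, 5, 3]})
print(round(lay_beliefs["ai_1"].corr(lay_beliefs["ai_2"]), 2))
```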
As a measure of adoption of algorithmic advice, we assessed whether participants had registered for the investment advice service. Hence, a dichotomous choice measure served as dependent variable (0 = did not register, 1 = registered). In total, 74 customers registered for the service.

3.2 Results

To examine the impact of lay beliefs about AI, we conducted logistic regression analyses. Satisfaction, convenience, total assets, and personal client advisor emerged as significant predictors and were included in the analyses. To facilitate interpretation, we z-standardized all predictor variables. We estimated three different models that predicted adoption of algorithmic advice by lay beliefs only (base model), by the control variables only (controls model), and by all predictors conjointly (full model). Testing these models allowed us to analyze the robustness of the effect of lay beliefs about AI and to examine whether a model which included lay beliefs better predicted adoption of algorithmic advice than a model which did not include lay beliefs about AI. Table 1 provides an overview of the different models. As expected, lay beliefs had a significant positive impact in the base model (Wald χ2(1) = 11.30, p < 0.001) and in the full model (Wald χ2(1) = 7.12, p = 0.01). Moreover, the full model (Nagelkerke R2 = 0.16) better predicted adoption of algorithmic advice than the controls model (Nagelkerke R2 = 0.14). Following the literature (Osborne, 2015), we also calculated predicted probabilities of adoption of algorithmic advice for participants believing in the superiority of AI (+ 1 SD, 17.9%) and participants not believing in the superiority of AI (− 1 SD, 9.1%) based on the regression equation of the full model. In sum, these findings support H1.
Table 1
Study 1: Binary logistic regression model results

Predictor              Base model                    Controls model                Full model
                       B      Wald    p       Exp(B) B      Wald    p       Exp(B) B      Wald     p       Exp(B)
Lay beliefs about AI   0.46   11.30   <0.001  1.59   –      –       –       –      0.39   7.12     0.01    1.47
Satisfaction           –      –       –       –      0.47   7.25    0.01    1.60   0.43   5.95     0.01    1.54
Convenience            –      –       –       –      0.53   12.92   <0.001  1.70   0.52   12.41    <0.001  1.69
Client advisor         –      –       –       –      −0.32  6.62    0.01    0.73   −0.26  4.43     0.04    0.77
Total assets           –      –       –       –      0.37   6.00    0.01    1.44   0.37   5.97     0.01    1.45
(Constant)             −1.71  159.77  <0.001  0.18   −1.87  147.63  <0.001  0.15   −1.91  146.56   <0.001  0.15
Model χ2(df)           12.06(1), p < 0.001           38.46(4), p < 0.001           45.92(5), p < 0.001
Nagelkerke R2          0.04                          0.14                          0.16

Note: Nagelkerke R2 is a goodness-of-fit measure for a logistic regression model that approximates the R2 of linear regression.
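To illustrate the analysis behind Table 1, the following sketch shows how the full model could be re-estimated with z-standardized predictors, together with Nagelkerke R2 and the predicted adoption probabilities at ± 1 SD of lay beliefs. The data file and column names are hypothetical assumptions, not our actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

d = pd.read_csv("study1_customers.csv")  # hypothetical data set, one row per customer

predictors = ["lay_beliefs", "satisfaction", "convenience", "client_advisor", "total_assets"]
X = d[predictors].apply(lambda col: (col - col.mean()) / col.std(ddof=1))  # z-standardize predictors
X = sm.add_constant(X)
y = d["registered"]  # 0 = did not register, 1 = registered for the robo-advisor

fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary())                 # B and p-values; Wald = z**2, Exp(B) = np.exp(fit.params)

# Nagelkerke R^2 from the log-likelihoods of the fitted and intercept-only models
n = len(y)
cox_snell = 1 - np.exp((2 / n) * (fit.llnull - fit.llf))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * fit.llnull))
print(round(nagelkerke, 2))

# Predicted adoption probabilities at +/- 1 SD of lay beliefs, other predictors at their means (z = 0)
for z in (1.0, -1.0):
    profile = pd.DataFrame([{"const": 1.0, **{p: 0.0 for p in predictors}, "lay_beliefs": z}])
    print(z, round(fit.predict(profile[X.columns])[0], 3))
```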
Study 1 provides initial evidence that individuals who believe that AI is higher than human intelligence are more likely to adopt algorithmic advice in a real market setting. However, there are also some limitations. First, as is typical of a field context, restrictions regarding the overall length of the survey allowed us to measure lay beliefs about AI with only two items. Second, our results are correlational in nature as we did not manipulate lay beliefs about AI.

4 Study 2

4.1 Methodology

In study 2, we wanted to replicate the findings of study 1. To increase the generalizability of our results, we focused on a different advice context, namely, interior design. Unlike investment advice, developing an interior design concept may be considered a more subjective task, requiring intuition and a sense of style in addition to analytical skills. A total of 73 students taking part in postgraduate classes at a Swiss university participated in the study (74.0% male, average age: 31.1 years). To evoke lay beliefs about AI, we asked participants to read a short interview with a professor, purportedly taken from a newspaper, which was intended to trigger the belief that AI was either higher or lower than human intelligence (see Web Appendix for all stimulus materials). Next, participants were presented with a screenshot of an alleged “robo-interior design service” which offered advice on developing a personalized furnishing concept and selecting the right furniture. Finally, they completed the control and dependent measures.
Intentions to adopt algorithmic advice and expected accuracy of advice were measured with three-item scales adapted from Venkatesh et al. (2012; α = 0.92) and Wixom and Todd (2005; α = 0.71). We also assessed whether lay beliefs about AI had been manipulated effectively (two-item scale from study 1, r = 0.63) and all control variables from study 1 except satisfaction with the bank and client advisor (note that we used household income as a proxy for wealth in this study). In addition, we measured inertia with a single item adapted from Meuter et al. (2005). Arguably, adoption of algorithmic advice would have required more effort in study 2 as it would have involved starting a relationship with a new service provider.

4.2 Results

Participants who read the interview stating that AI was superior to human intelligence believed more strongly that AI was higher than human intelligence (M = 5.21) compared to participants who read that AI was inferior to human intelligence (M = 3.59; F(1,71) = 35.07, p < 0.001). Convenience emerged as a significant covariate. None of the other variables emerged as significant covariates, and they were thus excluded from the analyses.
A one-way ANOVA revealed a significant effect for lay beliefs (F(1,70) = 4.26, p = 0.04). Participants who believed that AI was higher than human intelligence were more willing to adopt algorithmic advice (M = 5.30) than participants who did not have this belief (M = 4.80). These results provide further support for H1.
To examine the underlying process, we conducted mediation analysis with bootstrapping (Hayes, 2018; model 4). Expected accuracy of advice mediated the effect of lay beliefs on intention to adopt algorithmic advice [0.15, 95% CI = (0.0165, 0.3285)]. There was no direct effect of lay beliefs [0.15, 95% CI = (− 0.1233, 0.4265)].
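The bootstrapped indirect effect reported above can be illustrated with the following sketch of a percentile-bootstrap mediation test (in the spirit of Hayes, 2018, model 4). The data are simulated and the variable names are hypothetical; the sketch is not our actual analysis script.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 73
x = rng.integers(0, 2, n).astype(float)              # condition: 0 = AI inferior, 1 = AI superior
m = 4.0 + 0.5 * x + rng.normal(0, 1, n)              # expected accuracy (simulated)
y = 3.0 + 0.6 * m + 0.1 * x + rng.normal(0, 1, n)    # adoption intention (simulated)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # a path: X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # b path: M -> Y, controlling for X
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                       # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
print(np.percentile(boot, [2.5, 97.5]))               # 95% percentile CI for the indirect effect a*b
```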

5 Study 3

5.1 Methodology

The aim of study 3 was to test H2. In total, 227 Swiss consumers between 18 and 69 years were recruited by a direct marketing company and participated in the study (52.0% male, average age: 57.4 years). Participants were asked to indicate their lay beliefs about AI and were then exposed to a screenshot from an alleged new financial advice service called Robowealth. Perceived complexity was manipulated by varying the decision task. That is, participants were either exposed to a task of high complexity with many options (i.e., developing an optimal investment strategy) or to a task of lower complexity with fewer options (i.e., finding the right savings account). This setting allowed for a conservative test of our hypothesis as providing advice on a savings account may still be considered a relatively complex task. Next, participants completed the control and dependent measures.
To assess lay beliefs about AI, we adapted an eight-item scale from Truxillo et al. (2012; α = 0.80) and asked participants to evaluate different abilities of AI in comparison to human intelligence (e.g., abstract reasoning, processing of information).2 We used the same scales as in study 2 to measure intentions to adopt algorithmic advice (α = 0.89) and expected accuracy (α = 0.83). As a manipulation check, participants indicated on a seven-point scale whether the task on which Robowealth offered advice was simple (1) or complex (7). Finally, we included the same control variables as in study 2.

5.2 Results

Participants in the high complexity condition rated the task as more complex than participants in the low complexity condition (Mlow complexity = 4.06, Mhigh complexity = 4.58; F(1, 225) = 5.98, p = 0.02). Convenience and inertia emerged as significant covariates and were included in the analyses.
We mean-centered the lay beliefs about AI variable and conducted an OLS regression. This analysis revealed a significant main effect of lay beliefs about AI (b = 0.23, t = 2.36, p = 0.02), an insignificant effect of perceived complexity (b = 0.07, t = 0.79, p = 0.43), and significant effects of the control variables (convenience: b = 0.15, t = 2.67, p = 0.01; inertia: b = −0.30, t = −5.69, p < 0.001). Importantly, the interaction between lay beliefs about AI and perceived complexity was significant (b = 0.18, t = 1.93, p = 0.05). To follow up on this effect, we performed planned contrasts and compared participants believing that AI is higher than human intelligence (i.e., 1 SD above the mean) to participants not believing that AI is higher than human intelligence (i.e., 1 SD below the mean) within the two complexity conditions (see Fig. 2). When the task was perceived to be complex, participants had stronger intentions towards adopting algorithmic advice when they believed that AI was higher than human intelligence (MAI intellectually superior = 3.69, MAI not intellectually superior = 2.84; b = 0.40, t = 3.50, p < 0.001). When participants perceived the task to be less complex, intentions towards adopting algorithmic advice did not vary as a function of lay beliefs about AI (MAI intellectually superior = 3.17, MAI not intellectually superior = 3.07; b = 0.05, t = 0.34, p = 0.74). These findings support H2.
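The interaction model and the follow-up contrasts can be illustrated with the following sketch of a moderated regression with simple slopes at the two levels of the complexity manipulation. The data file, column names, and condition coding are hypothetical assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("study3.csv")                                   # hypothetical data set
d["beliefs_c"] = d["lay_beliefs"] - d["lay_beliefs"].mean()     # mean-center lay beliefs about AI
d["complexity"] = d["condition"].map({"low": 0, "high": 1})     # complexity manipulation (0/1)

fit = smf.ols("intention ~ beliefs_c * complexity + convenience + inertia", data=d).fit()
print(fit.summary())                                            # main effects, interaction, controls

# Simple slope of lay beliefs within each complexity condition:
# slope = b(beliefs_c) + b(beliefs_c:complexity) * complexity
b = fit.params
for w, label in [(1, "high complexity"), (0, "low complexity")]:
    print(label, round(b["beliefs_c"] + b["beliefs_c:complexity"] * w, 2))

# Predicted means at +/- 1 SD of lay beliefs can then be obtained from fit.predict()
# on profiles that fix the control variables at their sample means.
```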
Next, we investigated the underlying process. Figure 1 proposes a case of moderated mediation, in which perceived complexity of the task moderates the effect of lay beliefs about AI on expected accuracy which, in turn, affects adoption of algorithmic advice. Using the bootstrapping method (Hayes, 2018; model 7), we found that the indirect effect of lay beliefs about AI on behavioral intentions via accuracy was significant when the task was of high perceived complexity [0.34, 95% CI = (0.1940, 0.5215)] but not significant when the task was of low perceived complexity [0.09, 95% CI = (-0.0943, 0.2665)], indicating a mediation effect of expected accuracy. The confidence interval for the index of moderated mediation did not include zero [0.25, 95% CI = (0.0385, 0.5172)], indicating that the mediation is moderated.
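The moderated mediation test can be illustrated with the following sketch of a percentile-bootstrap estimate of the conditional indirect effects and the index of moderated mediation (in the spirit of Hayes, 2018, model 7). The data are simulated and the variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 227
x = rng.normal(0, 1, n)                              # mean-centered lay beliefs about AI
w = rng.integers(0, 2, n).astype(float)              # perceived complexity (0 = low, 1 = high)
m = 3.0 + 0.2 * x + 0.3 * x * w + rng.normal(0, 1, n)   # expected accuracy (simulated)
y = 2.0 + 0.5 * m + rng.normal(0, 1, n)                  # adoption intention (simulated)

def paths(x, w, m, y):
    a = sm.OLS(m, sm.add_constant(np.column_stack([x, w, x * w]))).fit().params
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a[1], a[3], b                             # a1 (X -> M), a3 (X*W -> M), b (M -> Y)

boot_index, boot_low, boot_high = [], [], []
for _ in range(5000):
    i = rng.integers(0, n, n)                        # resample cases with replacement
    a1, a3, b = paths(x[i], w[i], m[i], y[i])
    boot_index.append(a3 * b)                        # index of moderated mediation
    boot_low.append(a1 * b)                          # conditional indirect effect at W = 0 (low complexity)
    boot_high.append((a1 + a3) * b)                  # conditional indirect effect at W = 1 (high complexity)

for name, s in [("index", boot_index), ("low complexity", boot_low), ("high complexity", boot_high)]:
    print(name, np.percentile(s, [2.5, 97.5]))       # 95% percentile CIs
```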

6 General discussion

The purpose of this research was to investigate how lay beliefs about AI affect adoption of expert systems offering algorithmic advice to consumers in professional service contexts. Across different operationalizations (i.e., measuring consumers’ lay beliefs, temporarily activating lay beliefs) and different advice settings (i.e., financial advice, interior design advice), we demonstrated a robust effect of lay beliefs on adoption of algorithmic advice. Moreover, we identified task complexity as an important boundary condition and showed that lay beliefs about AI only had a significant impact on adoption of algorithmic advice when the decision task was perceived to be complex.

6.1 Theoretical implications and future research

These findings contribute to research on algorithmic advice in professional services. In this field, several studies have found that people reject algorithmic advice (Schmitt, 2019), while others have found that people “readily rely on algorithmic advice” (Logg et al., 2019, p. 99). Our findings may help to partially reconcile these views. That is, we show that people who believe that AI is superior to human intelligence may rely more heavily on algorithmic advice, while people who believe that AI is inferior to human intelligence may tend to reject algorithmic advice. More specifically, our research shows that people use their lay beliefs about AI as a heuristic to infer accuracy of advice when they do not have sufficient information and/or domain expertise to evaluate the quality of algorithmic advice, a situation typical of many professional advice contexts.
We also contribute to the literature by showing that lay beliefs about AI only influence adoption of algorithmic advice when a task is perceived to be complex. This finding extends previous research (Castelo et al., 2019) which found that algorithm aversion was less pronounced when people perceived a task to be objective, that is, when they felt that it required quantitative analysis rather than intuition. Specifically, our findings suggest that consumers may be more willing to use algorithmic advice for an objective task (e.g., selecting a financial product) when they believe that AI is superior to human intelligence and when the task is perceived to be complex. Moreover, the results of study 2 tentatively suggest that consumers may also be willing to use algorithmic advice for more intuitive, complex tasks (e.g., finding an interior design which matches their style) when they believe in the superiority of AI. In sum, our research provides a more fine-grained analysis of adoption of algorithmic advice.
While contributing to existing research, our studies also have some limitations that call for future research. First, we investigated the impact of lay beliefs about AI in the context of financial advice and interior design. However, in other contexts such as medical advice, psychological distress may be higher, and factors such as human uniqueness may be of greater relevance. Hence, it may be interesting to investigate the impact of lay beliefs about AI in such contexts. Second, it would be worthwhile to examine why people develop different lay beliefs about AI in the first place. For instance, people who are used to different types of AI applications (e.g., robots, chatbots) may develop different lay beliefs about AI. Similarly, individual differences in speciesism (i.e., a fundamental bias toward the human species) may favor the development of different lay beliefs about AI (Schmitt, 2020).

6.2 Managerial implications

Our findings also have clear managerial implications. The most straightforward one is that professional service firms that want to offer algorithmic advice need to recognize the different beliefs their customers have about AI. For example, 46.9% of bank customers participating in study 1 tended to believe that AI is higher than human intelligence (i.e., they had a value above the midpoint of the scale), but 34.6% tended to oppose this idea (i.e., they had a value below the midpoint of the scale). Hence, managers may segment their customers according to different lay beliefs about AI and offer different segments different types of advice.
If service companies, however, want to increase general adoption of algorithmic advice, they may try to influence their customers’ lay beliefs about AI. To this end, they may, for example, inform customers about the data processing capabilities of AI or provide statistics about the predictive superiority of AI-enabled services.
Finally, on a more general level, our research may help managers decide which types of advice they should automate. Whereas in practice companies often seem to focus on automating advice related to simple tasks (e.g., advice on standardized products such as car insurance), our research indicates that automating advice related to complex subjects (e.g., advice on individualized health care plans) may also be a promising endeavor when a substantial share of customers believes that AI is higher than human intelligence. Assuming that in the future more consumers will believe in the superiority of AI, professional service companies that consistently automate complex services may gain a competitive advantage.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Footnotes

1. We conducted four consumer focus groups (n = 29, average age 45.8 years, 75.9% male). Each group consisted of customers of different Swiss banks. Because individuals with and without experience with algorithmic advice may have different beliefs about AI, two focus groups (n = 16) were held with customers using algorithmic advice (“robo-advice”) and two focus groups (n = 13) were held with customers relying on personal advice. Groups lasted between 60 and 90 minutes. All groups were recorded and transcribed, resulting in 144 pages of verbatim transcripts.

2. Note that we also measured lay beliefs about AI using the two-item measure from study 1. This led to similar results.
References
Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101(2), 127–151.
Campbell, D. J. (1988). Task complexity: A review and analysis. Academy of Management Review, 13(1), 40–52.
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825.
Collier, J. E., & Kimes, S. E. (2013). Only if it is convenient: Understanding how convenience influences self-service technology evaluation. Journal of Service Research, 16(1), 39–51.
Dellaert, B. G. C., Donkers, B., & van Soest, A. (2012). Complexity effects in choice experiment–based models. Journal of Marketing Research, 49(3), 424–434.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.
Furnham, A. (2001). Self-estimates of intelligence: Culture and gender difference in self and other estimates of both general (g) and multiple intelligences. Personality and Individual Differences, 31(8), 1381–1405.
Gino, F., & Moore, D. A. (2007). Effects of task difficulty on use of advice. Journal of Behavioral Decision Making, 20(1), 21–35.
Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). The Guilford Press.
Hildebrand, C., & Bergner, A. (2021). Conversational robo advisors as surrogates of trust: Onboarding experience, firm perception, and consumer financial decision making. Journal of the Academy of Marketing Science, 49(4), 659–676.
Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151(2), 90–103.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Mende, M., & Bolton, R. N. (2011). Why attachment security matters: How customers’ attachment styles influence their relationships with service firms and service employees. Journal of Service Research, 14(3), 285–301.
Meuter, M. L., Bitner, M. J., Ostrom, A. L., & Brown, S. W. (2005). Choosing among alternative service delivery modes: An investigation of customer trial of self-service technologies. Journal of Marketing, 69(2), 61–83.
Mikolon, S., Kolberg, A., Haumann, T., & Wieseke, J. (2015). The complex role of complexity: How service providers can mitigate negative effects of perceived service complexity when selling professional services. Journal of Service Research, 18(4), 513–528.
Osborne, J. W. (2015). Best practices in logistic regression. SAGE Publications.
Petrides, K. V., Furnham, A., & Martin, G. N. (2004). Estimates of emotional and psychometric intelligence: Evidence for gender-based stereotypes. The Journal of Social Psychology, 144(2), 149–162.
Promberger, M., & Baron, J. (2006). Do patients trust computers? Journal of Behavioral Decision Making, 19(5), 455–468.
Sampson, S. E. (2021). A strategic framework for task automation in professional services. Journal of Service Research, 24(1), 122–140.
Schmitt, B. (2019). From atoms to bits and back: A research curation on digital technology and agenda for future research. Journal of Consumer Research, 46(4), 825–832.
Schmitt, B. (2020). Speciesism: An obstacle to AI and robot adoption. Marketing Letters, 31(1), 3–6.
Schrah, G. E., Dalal, R. S., & Sniezek, J. A. (2006). No decision-maker is an island: Integrating expert advice with information acquisition. Journal of Behavioral Decision Making, 19(1), 43–60.
Sternberg, R. J. (1985). Implicit theories of intelligence, creativity, and wisdom. Journal of Personality and Social Psychology, 49(3), 607–627.
Truxillo, D. M., McCune, E. A., Bertolino, M., & Fraccaroli, F. (2012). Perceptions of older versus younger workers in terms of big five facets, proactive personality, cognitive ability, and job performance. Journal of Applied Social Psychology, 42(11), 2607–2639.
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
Wixom, B. H., & Todd, P. A. (2005). A theoretical integration of user satisfaction and technology acceptance. Information Systems Research, 16(1), 85–102.