1 Introduction
2 Theoretical background and hypotheses development
2.1 Performance and perceptions of AI
2.2 Transparent AI triggers algorithm aversion
2.3 Transparent AI triggers perceived loss of control
2.4 Human-AI collaboration as a possible escape from negative consumer responses to AI
| Research streams | Source | Topic | Literature field(s) | Mediator(s), Moderator(s) | Main findings |
|---|---|---|---|---|---|
| Conceptual | Davenport et al. (2020) | Multidimensional framework integrating AI intelligence levels, task types and appearance | Marketing | – | AI will influence marketing strategies and consumer behavior in manifold ways. The authors suggest that human-AI collaboration is more effective than human replacement and that ethical issues need to be considered carefully |
| | Huang and Rust (2022) | Conceptual framework for collaborative AI in marketing | Marketing | – | AI advances from mechanical, to thinking, to feeling intelligence. Human-AI collaboration can be achieved by (1) using the respective strengths of human or AI, (2) using lower-level AI to augment higher-level human intelligence, or (3) using AI to automate lower-intelligence processes while humans focus on higher-intelligence tasks. Possible boundary conditions such as task (un-)desirability are discussed |
| | Paschen et al. (2020) | Conceptual framework for human-AI collaboration based on the B2B sales funnel | Marketing, Sales | – | AI could add significant value at every stage of the sales funnel. Specific contributions of AI and humans in the collaboration are outlined, and benefits for business are derived |
| | Raftopoulos and Hamari (2023) | Business potentials for value creation through human-AI collaboration | IS, general management | – | Human-AI collaboration offers substantial potential to create sustainable business value. The authors posit four main enablers of successful value creation: strategic positioning, human engagement, organizational evolution and technology development (including user acceptance to overcome AI aversion or AI anxiety) |
| | Sundar (2020) | Framework regarding the psychology of human-AI interaction | Media, HCI | – | The article proposes a framework to investigate synergies of human agency and machine agency. It presents symbolic and enabling effects of AI-driven media affordances on user perceptions, experiences and engagement with the AI medium. The author suggests that human-AI co-creation of media will significantly shape these outcomes |
| | Zanzotto et al. (2019) | Human-in-the-loop paradigm as a collaborative format in the economy | AI research | – | AI has disruptive potential for the economy and labor market due to its self-learning characteristics; human-AI collaboration, as the proposed system, helps human knowledge creators participate in AI-driven revenues and wealth |
| | Zhou et al. (2021) | Conceptual framework for symbiotic human-AI collaboration | IT, HCI | – | Intelligence Augmentation (IA), meaning that AI is used to amplify human abilities, is outlined as the recommended and highly likely format of human-AI collaboration. IA is expected to increase businesses' competitiveness and value |
| Empirical | Fan et al. (2022) | AI tools as support for user experience evaluations | IT, HCI | – | AI-assisted user experience (UX) assessments increased efficiency. AI was particularly supportive when its work was presented with explanations (i.e., how the AI identified UX problems) |
| | Fügener et al. (2022) | Productive delegation of tasks in human-AI collaboration | IS, HCI, general management | – | Human-AI collaboration could outperform AI, which outperforms humans in classification tasks. However, combined performance only improved when the AI delegated work to humans (and not vice versa), because humans could not assess their own capabilities correctly |
| | Lai et al. (2021) | Literature review on human-AI collaboration in healthcare | Healthcare, Medical | – | The literature on healthcare-related collaboration covers different AI areas and technologies, focuses on specific diseases, and examines different outcomes (e.g., task performance, completion time, learning, usability, acceptance) and stakeholders (e.g., healthcare professionals, patients, clinical researchers) |
| | Longoni et al. (2019) | Consumer responses to human-AI collaborative medical advice | Healthcare, Medical | Uniqueness neglect | Human-AI collaboration in giving medical advice eliminates individuals' resistance to accepting AI advice. Thus, AI is accepted when it supports (vs. replaces) a human doctor |
| | Schleith et al. (2022) | Human-AI collaboration regarding information extraction systems | Information systems | – | Installing specific forms of human-AI collaboration (i.e., manual rule-based review, black-box review) offers users more understanding and perceived control, which leads to higher trust and joint task efficiency |
| | Sowa et al. (2021) | Productivity due to human-AI collaboration in managerial tasks | Management | – | Two studies confirm that human-AI collaboration increases perceived productivity. Moreover, people are generally ready to use AI in managerial professions, and collaborative tools should be personalized. A higher sense of agency and the ability to influence tool design are suggested to improve the collaboration |
| | Waddell (2019) | Human-AI collaboration effects on readers' article credibility evaluations | Journalism | Med.: Perceived bias, anthropomorphism | The author finds two mediation paths: human-AI collaboration for news creation was perceived as less biased (vs. a human author), with positive effects on article (message) credibility. However, the collaborative format was also perceived as less anthropomorphic than a human author, creating negative effects on message credibility. Topic type and story context did not significantly influence these effects |
| | Wang et al. (2022) | Human-AI collaboration in code documentation as part of data science | Data Science | – | AI-assisted coding and code documentation support human researchers and lead to improved quality and productivity. In particular, human-in-the-loop (and similar) design principles support these goals |
| | Wölker and Powell (2018) | Perceived article credibility and news selection of automated journalism | Journalism | Med.: Message credibility, source credibility; Mod.: Article type | Credibility perceptions did not differ significantly between human, AI, or human-AI collaborative authorship formats. Moreover, news selection is not influenced by message or source credibility |
| Our study | | Consumer perceptions of two forms of human-AI collaboration on firm evaluation | Marketing, management | Med.: Message credibility; Mod.: Perceived morality of companies' AI use | Human-AI collaboration can alleviate negative consumer responses to AI use, but only when the collaboration indicates human control over AI (vs. AI supporting a human). When consumers perceive high morality in companies' AI use, the negative effects of any author type vanish |
2.4.1 The relationship between different forms of human-AI collaboration, message credibility, and attitude towards the company
2.5 The moderating role of morality of AI use
3 Study 1
3.1 Participants and procedure
| Construct name and items | Standardized loadings (Study 1) | Standardized loadings (Study 2) |
|---|---|---|
| Message Credibility (Study 1/Study 2: α = .88/.91; CR = .88/.86; AVE = .65/.61) | | |
| This text … | | |
| … is generally truthful | 0.75 | 0.75 |
| … leaves one feeling accurately informed | 0.79 | 0.74 |
| … is believable | 0.84 | 0.85 |
| … is authentic | 0.83 | 0.77 |
| Attitude towards the company (Study 1/Study 2: α = .95/.89; CR = .92/.85; AVE = .79/.66) | | |
| This company is a good company | 0.88 | 0.85 |
| This company is a nice company | 0.90 | 0.84 |
| I like the company | 0.89 | 0.74 |
| Morality of AI use (Study 2: α = .93; CR = .93; AVE = .77) | | |
| Companies using artificial intelligence (AI) in marketing texts are… | | |
| Cruel (1) versus Kind-hearted (7) | | 0.88 |
| Immoral (1) versus Moral (7) | | 0.90 |
| Uncaring (1) versus Caring (7) | | 0.83 |
| Unethical (1) versus Ethical (7) | | 0.89 |
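For readers who want to check the reliability and validity figures reported in the construct headers, the CR and AVE values follow directly from the standardized loadings under the conventional formulas CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / n. The short Python sketch below is ours, not part of the original analysis, and simply reproduces the Study 1 values for message credibility from the loadings in the table (Cronbach's α cannot be recomputed this way, as it requires the item covariances).

```python
# Minimal sketch: composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings, using the conventional formulas.

def composite_reliability(loadings):
    total = sum(loadings)                      # Σλ
    error = sum(1 - l**2 for l in loadings)    # Σ(1 − λ²)
    return total**2 / (total**2 + error)

def average_variance_extracted(loadings):
    return sum(l**2 for l in loadings) / len(loadings)

# Message credibility, Study 1 (loadings taken from the table above)
lam = [0.75, 0.79, 0.84, 0.83]
print(round(composite_reliability(lam), 2))       # ≈ 0.88, matching CR = .88
print(round(average_variance_extracted(lam), 2))  # ≈ 0.65, matching AVE = .65
```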
3.2 Results
4 Study 2
4.1 Participants and procedure
4.2 Results
| Predictor | Mediator model: Message credibility (b) | t | Outcome model: Attitude towards the company (b) | t |
|---|---|---|---|---|
| X1: Human supported by AI versus Human | −1.72 | −2.07* | −0.48 | −0.71 n.s. |
| X2: AI controlled by Human versus Human | 0.39 | 0.47 n.s. | −0.71 | −1.07 n.s. |
| X3: AI versus Human | −2.93 | −3.33** | −0.01 | −0.01 n.s. |
| W: Morality of AI use | 0.40 | 3.22** | −0.03 | −0.26 n.s. |
| M: Message credibility | – | – | 0.70 | 12.48*** |
| X1 × W | 0.22 | 1.35 n.s. | 0.06 | 0.44 n.s. |
| X2 × W | −0.12 | −0.75 n.s. | 0.12 | 0.90 n.s. |
| X3 × W | 0.42 | 2.38* | −0.01 | −0.07 n.s. |
| COV: Age | 0.01 | 0.80 n.s. | −0.00 | −0.50 n.s. |
| COV: Gender | −0.13 | −0.91 n.s. | 0.00 | 0.02 n.s. |
| COV: Industry type | 0.12 | 0.79 n.s. | 0.20 | 1.70 |
| Predictor | Morality of AI use (W) | b | Lower | Upper |
|---|---|---|---|---|
| X1: Human supported by AI versus Human | 3.67 | −0.64 | −1.19 | −0.09 |
| | 5.02 | −0.43 | −0.73 | −0.15 |
| | 6.37 | −0.23 | −0.53 | 0.06 |
| X2: AI controlled by Human versus Human | 3.67 | −0.03 | −0.59 | 0.43 |
| | 5.02 | −0.15 | −0.42 | 0.11 |
| | 6.37 | −0.26 | −0.54 | 0.03 |
| X3: AI versus Human | 3.67 | −0.96 | −1.47 | −0.47 |
| | 5.02 | −0.56 | −0.86 | −0.27 |
| | 6.37 | −0.16 | −0.47 | 0.17† |
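The point estimates in this table can be traced back to the regression coefficients reported above, assuming the usual pick-a-point logic of a first-stage moderated mediation: the conditional indirect effect of each author-type contrast at moderator value W is (a_X + a_X×W · W) · b, where b = 0.70 is the effect of message credibility on attitude towards the company. The Python sketch below is purely illustrative (ours, not the authors' code) and reconstructs the b column from the rounded coefficients; small discrepancies of about 0.01–0.02 reflect rounding of the reported inputs, and the Lower/Upper bounds come from bootstrapping and cannot be recovered this way.

```python
# Illustrative reconstruction of the conditional indirect effects,
# assuming indirect(W) = (a_X + a_interaction * W) * b.

B_M = 0.70  # b-path: message credibility -> attitude towards the company

# (a_X, a_interaction) from the mediator model in the previous table
paths = {
    "X1: Human supported by AI vs. Human": (-1.72, 0.22),
    "X2: AI controlled by Human vs. Human": (0.39, -0.12),
    "X3: AI vs. Human": (-2.93, 0.42),
}

morality_levels = [3.67, 5.02, 6.37]  # moderator values used in the table

for label, (a, a_int) in paths.items():
    effects = [round((a + a_int * w) * B_M, 2) for w in morality_levels]
    print(label, effects)

# Example: for X3 at W = 5.02, (-2.93 + 0.42 * 5.02) * 0.70 ≈ -0.58,
# close to the tabled -0.56; the gap is due to rounding of the inputs.
```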