Introduction
Authors | Context | Discussed Effects of Anthropomorphism | Key Dependent Variables Discussed | Relevant Findings & Insights |
---|---|---|---|---|
Foehr & Germelmann, 2020 | AIAs | Beneficial | Trust in the AIA | Alongside other paths to trust, this study shows that an AIA’s humanlike personality fosters consumer trust. |
Li & Sung, 2021 | AIAs | Beneficial | Psychological distance with the AIA | When anthropomorphism of the AI assistant was high, participants had more positive attitudes toward the AI assistant, mediated by psychological distance with the AIA. |
Pitardi & Marriott, 2021 | AIAs | Beneficial | Trust in the AIA, Attitude towards AIA | This study shows that social presence of the AIAs improved the consumers’ attitude towards the AIA through increasing trust felt towards the AIA. |
Moriuchi, 2021 | AIAs | Beneficial | Intention to Adopt | Greater perceived anthropomorphism of the AIA predicted higher intention to adopt the technology through greater engagement of the consumer. |
Mishra et al., 2021 | AIAs | Beneficial | Attitude, Usage, Satisfaction | Anthropomorphism was positively related to utilitarian attitudes held towards the AIAs. Mediated by utilitarian attitudes, greater perceived anthropomorphism of the AIA increased frequency of AIA use and satisfaction with the AIA. |
Benlian et al., 2020 | AIAs | Beneficial | Psychological Strain | Anthropomorphic design features can mitigate the adverse effects of the invasion of users’ privacy created through invasive AIA features. |
Poushneh, 2021 | AIAs | Beneficial | Consumer Satisfaction | Anthropomorphic personality features in the AIA increased consumers’ perceived control over the AIA, which in turn indirectly increased consumer satisfaction with the AIA and consumers’ willingness to continue to use the AIA. |
Ferrari et al., 2016 | Social robots | Harmful | Threat to human identity | A high degree of anthropomorphism in robots undermines human–machine distinctiveness and subsequently elicits a feeling of threat to human identity. |
Mende et al., 2019 | Humanoid service robots | Harmful | Threat to human identity | Highly anthropomorphized humanoid service robots lead to a feeling of threat to human identity. |
Appel et al., 2020 | Humanoid service robots | Harmful | Feeling of eeriness | A robot’s capacity to feel (but not agency) elicits stronger feelings of eeriness. |
Stein & Ohler, 2017 | Virtual agents in virtual reality | Harmful | Feeling of eeriness | Emotionally anthropomorphic autonomous virtual agents (in VR) elicit stronger eeriness in participants, by blurring human-machine distinctiveness. |
Our Paper | AIAs | Beneficial and Harmful | Trust in the AIA, Threat to Human Identity, Consumer Well-being, Consumer Empowerment | Anthropomorphism elicits consumer trust, but also a threat to human identity. This identity threat reduces consumer well-being by reducing consumer AI empowerment. Raising consumers’ awareness of privacy issues and solutions, and urging them to take action to protect their privacy, alleviates the harmful effect of identity threat on empowerment. |
Preliminary insights on consumers’ relationships with artificial intelligence assistants
Preliminary consumer survey
Preliminary interviews
Category | Representative Quotes |
---|---|
Benefits of Consumer-AIA relationships | “Alexa gives me more free time. It is like she is supporting me […]” (F, 14:53) “She is powerful because she has many functionalities and can do many things […] but she is not powerful in the traditional sense so that she is controlling me or prescribing me things” (E, 26:54) |
Ambivalent Perceptions of Consumer-AIA relationship | “I do like it because it is helpful but sometimes some strange things happen. Sometimes it starts singing without anything happening around and you think what is going on? That is kind of strange but in general it is helpful” (J, 3:31) “[I would describe it as an] ambivalent relationship because Alexa is great with regard to a family factor, especially the opportunity to play music via Alexa […]. But from a data protection perspective, I am critical towards it” (K, 4:03) “Yes, definitely […] I think some jobs will be replaced but many things get easier and they [AI] are opening new windows of opportunity” (E, 25:10) |
Negative Perceptions of Consumer-AIA relationship | Privacy Concerns: “I know that there is a silent listener […] we have a big house […] but she can hear us talking in most of our rooms” (C, 29:20) “I would not say that an AI could threaten us, but I would say […] that they gather data that feed algorithms in order to create personalized advertisements. And this is data that is used to create capital […] So I don’t see a threat but the chance of exploiting it [the gathering of data]” (A, 10:00) “I do not even know what functions the device has […] I think that I still have an uncertainty about how it works with the purchases and whether it is too unsafe […] I would say that the device does not always understand you well and as soon as it comes to shopping, for example, I am afraid to touch such functions.” (A, 3:36) “If I read something to my daughter and she [Alexa] talks in between, she is turned off. And if I feel to be controlled too much by her she is turned off as well. So there are certain situations where I think ‘this does not need to be on the tape’” (K, 20:40) Human Identity Threat & Fears: “A threat is that the mankind is isolating itself and takes care of machines and maybe is going to communicate more with these […] the human gets lost” (I, 13:25) “AI is taking the upper hand […] Especially children and teenagers are on the cell phones. I think that there is a great danger […] because many of them become lonely” (C, 24:10) “[Alexa is] a little bit threatening. Because she is understanding certain things and you’re not thinking about it while speaking all the time […] After talking about it for a longer time you get aware of it” (J, 30:00) “All those technologies might change the structure of the society” (J, 39:00) “The role [of AI] is difficult because too many jobs are going to be lost as a result […] humans are being replaced too much by technology.” (G, 8:30) “If I look at the world of work and how much human intelligence is replaced by AI […] it is something that increases efficiency for companies, but ultimately costs human jobs.” (K, 15:40) |
Theoretical framework
Integrating mind perception theory and social exchange theory to build a user–AI relationship perspective
Beneficial effects of relationships with artificial intelligence assistants
Harmful effects of relationships with artificial intelligence assistants
Moderating effects of relationship characteristics
Empowering consumers in their relationships with artificial intelligence assistants
Study 1: Ambivalent effects of AIA anthropomorphism on user–AIA relationships
Measures
Variables | Measurement Items | Scale Evaluations, Study 1 | Scale Evaluations, Study 2 |
---|---|---|---|
Relationship Length | What is the approximate month and year in which you acquired (or received) [Platform]? | – | – |
Consumer Satisfaction Ω (adapted from Garbarino & Johnson, 1999) | How would you rate your overall satisfaction with [Platform]? | – | – |
AIA Anthropomorphism ϕ (from Epley et al., 2008) | Please indicate how much you agree that [Platform] γ … • …has a mind of its own. • …has intentions. • …has free will. • …has consciousness. • …experiences emotions. | α = .86 CR = .87 AVE = .57 | α = .90 CR = .91 AVE = .67 |
Trust in the AIA ϕ (adapted from Leach et al., 2007) | How much do you think the traits below describe [Platform]? • Honest • Sincere • Trustworthy | α = .92 CR = .91 AVE = .78 | α = .87 CR = .87 AVE = .69 |
Privacy Concerns ω (adapted from Smith et al., 1996) | Please indicate how much you agree with the following statements. • I am worried that [Platform] collects personal data from me. • It scares me that [Platform] measures personal data of me. • I am worried that [Platform] will get personal information from me. • It is scary for me that [Platform] takes personal data from me. | α = .95 CR = .95 AVE = .84 | α = .96 CR = .96 AVE = .87 |
Threat to Human Identity ω | Please indicate how much you agree with the following statements. • [Platform] seems to lessen the value of human existence. • I get the feeling that [Platform] could damage relations between people. • [Platform] could easily be used for evil (to fool, to harm, etc.) • [Platform] makes me worry about my place in the world. | α = .81 CR = .77 AVE = .47 | α = .81 CR = .82 AVE = .53 |
Consumer Wellbeing ψ (adapted from Bech et al., 2003) | Over the last two weeks… • I have felt cheerful and in good spirits. • I have felt calm and relaxed. • I have felt active and vigorous. • I woke up feeling fresh and rested. • My daily life has been filled with things that interest me. | α = .89 CR = .89 AVE = .62 | α = .91 CR = .92 AVE = .69 |
Consumer AI Empowerment ω (adapted from Spreitzer, 1995) | Please indicate how much you agree with the following statements. • I am confident about my ability to handle [Platform]. • I am self-assured about my capabilities to protect the privacy of my data that is collected by [Platform]. • I have mastered the skills to protect the privacy of my data that is collected by [Platform]. | – | α = .88 CR = .88 AVE = .71 |
Relationship Closeness with the AIA λ (adapted from Castelo et al., 2019) | • Do you think the Personal Organization tasks (such as making calls, sending messages, scheduling, reminders, alarms, setting daily goals, etc.) that you perform with [Platform] are more consequential or inconsequential for you? • Do you think the tasks related to getting Information (such as searching information, listening to the news, asking the weather conditions, etc.) that you perform with [Platform] are more consequential or inconsequential for you? • Do you think the Device Management tasks (such as controlling smart home gadgets, lights, thermostats, security camera, etc.) that you perform with [Platform] are more consequential or inconsequential for you? • Do you think the Purchase-related tasks (such as searching for products to buy, asking price information or recommendations, renewing your subscription, ordering and tracking deliveries, etc.) that you perform with [Platform] are more consequential or inconsequential for you? | α = .80 CR = .80 AVE = .50 | – |
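As an illustration of how the scale evaluations reported above (Cronbach’s α, composite reliability CR, and average variance extracted AVE) are conventionally computed, here is a minimal sketch. This is not the authors’ analysis code; the function names and any inputs are hypothetical, and CR/AVE assume standardized factor loadings.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized loadings; error variances are 1 - loading^2."""
    errors = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE: mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())
```

For example, with four items all loading at .90 (roughly the pattern behind the Privacy Concerns scale), CR and AVE come out near the reported .95 and .84.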
Variables, Study 1 | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13) | (14) | ||
(1) AIA Anthropomorphism | ||||||||||||||||
(2) Trust | .28 | |||||||||||||||
(3) Identity Threat | .27 | −.12 | ||||||||||||||
(4) Privacy Concerns | −.08 | −.27 | .32 | |||||||||||||
(5) Satisfaction | .17 | .44 | −.12 | −.28 | ||||||||||||
(6) Wellbeing | .06 | .03 | −.04 | −.15 | .11 | |||||||||||
(7) Relationship Closeness | .06 | .07 | .12 | −.08 | .19 | .04 | ||||||||||
(8) Relationship Length | −.02 | −.05 | .11 | .06 | −.10 | −.10 | .02 | |||||||||
(9) Perc. Usefulness | .26 | .34 | .09 | −.06 | .49 | .05 | .23 | .01 | ||||||||
(10) Perc. Ease of Use | .04 | .28 | −.12 | −.07 | .38 | .02 | .12 | −.09 | .27 | |||||||
(11) Innovativeness | −.01 | .25 | −.06 | −.10 | .13 | .05 | .03 | .11 | .22 | .40 | ||||||
(12) Frequency of Use | .06 | .17 | .04 | .06 | .33 | .05 | .12 | .03 | .35 | .19 | .20 | |||||
(13) Age | .08 | −.01 | .01 | −.09 | −.09 | −.01 | −.05 | −.06 | −.14 | −.22 | −.14 | .02 | ||||
(14) Gender | −.08 | .02 | −.11 | −.03 | −.03 | .03 | .03 | −.02 | .01 | .02 | .29 | .12 | .04 | |||
M | 2.19 | 4.83 | 3.00 | 4.75 | 5.30 | 3.43 | 4.22 | 2.12 | 4.47 | 5.69 | 4.85 | 2.87 | 37.1 | – | ||
SD | 1.23 | 1.39 | 1.13 | 1.36 | 1.21 | 1.03 | 1.44 | .97 | 1.29 | .93 | 1.39 | 1.72 | 11.6 | – | ||
Variables, Study 2 | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13) | (14) | (15) | (16) |
(1) AIA Anthropomorphism | ||||||||||||||||
(2) Trust | .24 | |||||||||||||||
(3) Identity Threat | .44 | −.07 | ||||||||||||||
(4) Privacy Concerns | .20 | −.22 | .49 | |||||||||||||
(5) Satisfaction | .13 | .45 | −.15 | −.21 | ||||||||||||
(6) Wellbeing | −.01 | .09 | −.14 | −.13 | .12 | |||||||||||
(7) Empowerment | .04 | .28 | −.19 | −.32 | .29 | .19 | ||||||||||
(8) Relationship Length | −.05 | −.02 | −.04 | −.04 | −.01 | −.07 | .08 | |||||||||
(9) Perceived Usefulness | −.10 | .36 | −.06 | −.12 | .43 | .12 | .28 | .02 | ||||||||
(10) Perceived Ease of Use | −.05 | .20 | −.19 | −.17 | .41 | .11 | .39 | .09 | .43 | |||||||
(11) Innovativeness | −.01 | .18 | −.07 | −.15 | .19 | .08 | .37 | .12 | .31 | .43 | ||||||
(12) Frequency of AIA Use | .08 | .18 | −.06 | −.14 | .30 | .02 | .19 | .11 | .31 | .15 | .29 | |||||
(13) Technology Attitude | .01 | .39 | −.26 | −.29 | .32 | .13 | .37 | .02 | .42 | .35 | .43 | .21 | ||||
(14) Trust in Amazon | .7 | .45 | −.12 | −.35 | .38 | .11 | .32 | −.04 | .31 | .21 | .16 | .18 | .42 | |||
(15) Age | −.06 | −.05 | −.02 | −.03 | .02 | .01 | −.08 | .17 | −.12 | −.24 | −.07 | .07 | −.08 | .01 | ||
(16) Gender | −.12 | −.01 | −.07 | −.05 | −.05 | .10 | .12 | −.01 | .08 | .10 | .37 | .07 | .17 | −.01 | −.01 | |
M | 2.10 | 4.38 | 2.98 | 4.31 | 5.40 | 3.60 | 4.70 | 32.05 | 4.65 | 5.48 | 4.64 | 2.98 | 5.02 | 4.29 | 35.2 | – |
SD | 1.22 | 1.32 | 1.22 | 1.55 | 1.06 | 1.12 | 1.21 | 15.00 | 1.26 | 1.10 | 1.54 | 1.60 | 1.02 | 1.46 | 12.9 | – |
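The descriptive statistics (M, SD) and Pearson correlation matrices above can be reproduced from raw survey data along the following lines. This is a hypothetical sketch, not the authors’ code: the data are simulated and the column names are illustrative only.

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the survey data; columns are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "anthropomorphism": rng.normal(2.2, 1.2, 200),
    "trust": rng.normal(4.8, 1.4, 200),
    "identity_threat": rng.normal(3.0, 1.1, 200),
})

corr = df.corr().round(2)                 # Pearson correlation matrix
desc = df.agg(["mean", "std"]).round(2)   # M and SD rows, as in the tables
```

In the published tables only the lower triangle of `corr` is shown, since the matrix is symmetric.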
Model specification and results
Predictor | Outcome | Full model | Model w/out controls | Full model, bootstrapped | 95% CIs |
---|---|---|---|---|---|
Study 1 | |||||
Main links | Coef. (SE) | Coef. (SE) | Coef. (SE) | ||
Anthropomorphism (ANTHRO) [H1] | →Trust towards the AIA | .236 (.068)*** | .296 (.070)*** | .209 (.058)*** | [.094, .323] |
Anthropomorphism [H2, H3, H4] | →Threat to human identity | .249 (.058)*** | .252 (.056)*** | .270 (.061)*** | [.150, .391] |
Trust towards the AIA | →Privacy Concerns | −.193 (.066)** | −.189 (.061)** | −.199 (.072)** | [−.340, −.058] |
Threat to human identity [H2] | →Privacy Concerns | .387 (.076)*** | .394 (.075)*** | .325 (.067)*** | [.194, .457] |
Trust towards the AIA [H1] | →Consumer Satisfaction | .190 (.049)*** | .329 (.053)*** | .219 (.070)** | [.082, .356] |
Threat to human identity [H3] | →Consumer Satisfaction | −.051 (.058) | −.040 (.068) | −.048 (.062) | [−.170, .074] |
Trust towards the AIA | →Consumer Well-being | −.031 (.055) | −.007 (.051) | −.042 (.073) | [−.185, .101] |
Threat to human identity [H4] | →Consumer Well-being | −.011 (.066) | −.008 (.065) | −.012 (.077) | [−.164, .139] |
Privacy Concerns | →Consumer Satisfaction | −.184 (.047)*** | −.147 (.055)** | −.206 (.053)*** | [−.310, −.102] |
Privacy Concerns | →Consumer Well-being | −.116 (.054)* | −.107 (.053)* | −.152 (.067)* | [−.285, −.020] |
Main effects of the moderators & Interaction effects | |||||
Relationship Closeness (RC) | →Threat to human identity | .086 (.048) | .077 (.048) | .110 (.063) | [−.014, .235] |
Relationship Length (RL) | →Threat to human identity | .064 (.073) | .096 (.072) | .055 (.058) | [−.059, .168] |
ANTHRO × RC | →Threat to human identity | −.025 (.038) | −.026 (.037) | −.041 (.062) | [−.163, .080] |
ANTHRO × RL | →Threat to human identity | −.088 (.063) | −.077 (.063) | −.087 (.056) | [−.197, .124] |
RC × RL | →Threat to human identity | .094 (.052) | .068 (.051) | .114 (.060) | [−.004, .231] |
ANTHRO × RC × RL [H5] | →Threat to human identity | .087 (.041)* | .085 (.042)* | .130 (.059)* | [.015, .245] |
ANTHRO × RC | →Trust towards the AIA | .016 (.044) | −.005 (.047) | .021 (.072) | [−.120, .163] |
ANTHRO × RL | →Trust towards the AIA | .091 (.074) | .121 (.079) | .073 (.056) | [−.037, .183] |
RC × RL | →Trust towards the AIA | −.030 (.061) | −.029 (.064) | −.030 (.068) | [−.163, .103] |
ANTHRO × RC × RL | →Trust towards the AIA | −.065 (.049) | −.085 (.052) | −.080 (.061) | [−.200, .040] |
Full Model: R2 for Satisfaction: .444; R2 for Well-being: .031; R2 for Privacy Concerns: .173; R2 for Trust: .229; R2 for Threat: .155. |||||
Model without controls: R2 for Satisfaction: .220; R2 for Well-being: .022; R2 for Privacy Concerns: .146; R2 for Trust: .094; R2 for Threat: .119. |||||
Full Model, bootstrapped: R2 for Satisfaction: .445; R2 for Well-being: .031; R2 for Privacy Concerns: .173; R2 for Trust: .229; R2 for Threat: .155. |||||
Study 2 | |||||
Main links | Coef. (SE) | Coef. (SE) | Coef. (SE) | ||
Anthropomorphism | →Trust towards the AIA | .181 (.038)*** | .259 (.042)*** | .169 (.033)*** | [.104, .234] |
Anthropomorphism | →Threat to human identity | .445 (.036)*** | .437 (.037)*** | .445 (.037)*** | [.373, .517] |
Trust towards the AIA | →Consumer AI Empowerment | .067 (.038) | .225 (.037)*** | .071 (.045) | [−.017, .160] |
Threat to human identity [H6] | →Consumer AI Empowerment | −.195 (.073)** | −.304 (.080)*** | −.195 (.074)** | [−.341, −.049] |
Consumer AI Empowerment | →Privacy Concerns | −.216 (.049)*** | −.268 (.046)*** | −.170 (.041)*** | [−.250, −.090] |
Consumer AI Empowerment [H6] | →Consumer Well-being | .126 (.044)** | .142 (.040)*** | .137 (.048)** | [.043, .231] |
Consumer AI Empowerment | →Consumer Satisfaction | .030 (.033) | .135 (.034)*** | .035 (.038) | [−.040, .111] |
Privacy Concerns | →Consumer Well-being | −.016 (.036) | −.018 (.035) | −.022 (.055) | [−.129, .086] |
Privacy Concerns | →Consumer Satisfaction | −.001 (.027) | −.024 (.029) | −.001 (.039) | [−.077, .077] |
Trust towards the AIA | →Privacy Concerns | −.086 (.047) | −.183 (.043)*** | −.072 (.042) | [−.156, .011] |
Threat to human identity | →Privacy Concerns | .481 (.049)*** | .518 (.049)*** | .378 (.042)*** | [.292, .465] |
Trust towards the AIA | →Consumer Satisfaction | .197 (.031)*** | .295 (.031)*** | .244 (.039)*** | [.168, .319] |
Threat to human identity | →Consumer Satisfaction | −.072 (.035)* | −.106 (.038)** | −.083 (.038)* | [−.158, −.009] |
Trust towards the AIA | →Consumer Well-being | .008 (.041) | .028 (.037) | .009 (.052) | [−.093, .112] |
Threat to human identity | →Consumer Well-being | −.092 (.047)* | −.098 (.046)* | −.101 (.057) | [−.212, .011] |
Main effects of the moderators & Interaction effects | |||||
Relationship Length (RL) | →Consumer AI Empowerment | −.005 (.006) | −.002 (.006) | −.061 (.074) | [−.206, .084] |
Intervention 1: Awareness (INT1) | →Consumer AI Empowerment | −.031 (.116) | −.054 (.028) | −.011 (.041) | [−.092, .070] |
Intervention 2: Solution (INT2) | →Consumer AI Empowerment | .252 (.119)* | .216 (.131) | .089 (.042)* | [.007, .170] |
Intervention 3: Action (INT3) | →Consumer AI Empowerment | .163 (.118) | .130 (.131) | .057 (.041) | [−.024, .138] |
THREAT × RL | →Consumer AI Empowerment | −.015 (.005)** | −.014 (.006)* | −.234 (.095)* | [−.421, −.047] |
THREAT × INT1 | →Consumer AI Empowerment | .095 (.103) | .128 (.114) | .043 (.050) | [−.055, .140] |
THREAT × INT2 | →Consumer AI Empowerment | .130 (.097) | .159 (.106) | .069 (.056) | [−.041, .179] |
THREAT × INT3 | →Consumer AI Empowerment | .081 (.096) | .112 (.106) | .042 (.053) | [−.061, .146] |
INT1 × RL | →Consumer AI Empowerment | .011 (.008) | .008 (.009) | .066 (.051) | [−.033, .166] |
INT2 × RL | →Consumer AI Empowerment | .009 (.009) | .004 (.010) | .052 (.048) | [−.042, .146] |
INT3 × RL | →Consumer AI Empowerment | .013 (.008) | .016 (.008) | .091 (.048) | [−.004, .186] |
THREAT × RL × INT1 [H7] | →Consumer AI Empowerment | .017 (.007)* | .016 (.008)* | .118 (.054)* | [.011, .224] |
THREAT × RL × INT2 [H8] | →Consumer AI Empowerment | .013 (.007)† | .016 (.008)* | .104 (.061)† | [−.015, .223] |
THREAT × RL × INT3 [H9] | →Consumer AI Empowerment | .018 (.006)** | .018 (.007)** | .176 (.066)** | [.046, .305] |
Full Model: R2 for Satisfaction: .404; R2 for Well-being: .064; R2 for Privacy Concerns: .369; R2 for Trust: .313; R2 for Threat: .284. |||||
Model without controls: R2 for Satisfaction: .232; R2 for Well-being: .051; R2 for Privacy Concerns: .306; R2 for Trust: .058; R2 for Threat: .191. |||||
Full Model, bootstrapped: R2 for Satisfaction: .404; R2 for Well-being: .064; R2 for Privacy Concerns: .369; R2 for Trust: .313; R2 for Threat: .284. |||||
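The bootstrapped coefficients and percentile 95% CIs reported in the tables above can be obtained by resampling respondents and re-estimating the model on each resample. The sketch below illustrates the idea for a single equation with a three-way interaction (as in THREAT × RL × INT); it uses simulated data and plain OLS rather than the full path model, so it is an assumption-laden illustration, not the authors’ estimation code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated, mean-centered predictors (illustrative only).
threat = rng.normal(0, 1, n)           # threat to human identity
rl = rng.normal(0, 1, n)               # relationship length
int1 = rng.integers(0, 2, n)           # intervention dummy (e.g., awareness)
y = 0.2 * threat * rl * int1 + rng.normal(0, 1, n)  # empowerment

# Design matrix: intercept, main effects, two-way and three-way interactions.
X = np.column_stack([
    np.ones(n), threat, rl, int1,
    threat * rl, threat * int1, rl * int1,
    threat * rl * int1,
])

def ols(X, y):
    """Least-squares coefficient vector."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Percentile bootstrap of the three-way coefficient (last column of X).
boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, n, n)        # resample respondents with replacement
    boot[b] = ols(X[idx], y[idx])[-1]
ci = np.percentile(boot, [2.5, 97.5])  # 95% CI, as in the tables
```

Significance in this scheme corresponds to the 95% CI excluding zero, which matches how the starred bootstrapped coefficients in the tables line up with their reported intervals.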
Discussion of Study 1
Study 2: A field experiment to test interventions that empower consumers
Procedure
Measures
Model specification and results
Discussion of Study 2
General discussion
Theoretical contributions
Managerial implications
Limitations
Directions for future research
Research Area/ Field | Key Result | Managerial Implications | Future Research Questions |
---|---|---|---|
Relationship Marketing | • User–AIA interactions can be characterized as an ongoing, dynamic relationship. The characteristics of this relationship play a crucial role in shaping the consumer’s experience with AI. | • AIAs might have some untapped potential as a medium to develop customer relationships. | • How does the consumers’ relationship with AIAs develop over time? • How do consumers migrate from one stage of the relationship to another? Are there critical events that shape the relationship trajectories? • How do consumer sentiments change over time, and how can they inform other consumer metrics (e.g., purchase)? • How can AIAs be used effectively to establish and maintain strong customer relationships, as an intermediary between the customer and the firm? • How do the relationship dynamics start and evolve when there are multiple users of one single device? |
Consumer Empowerment & Privacy Management | • Informing consumers about the data practices of AIAs and about ways to protect their data privacy, and encouraging them to take action to protect it, empowers them and subsequently improves their well-being. • Empowering consumers with respect to the privacy of their personal data alleviates their privacy concerns. | • Actively informing consumers about protecting their data, and encouraging them to do so, could improve relationships and increase sales through the AIA. • Managing consumer privacy effectively can be an opportunity to enhance the brand image and create a competitive advantage for firms. | • How can we empower consumers, other than by decreasing the information asymmetry or by encouraging them to take action? • How can the information best be delivered to consumers? • What can an empowered consumer mean for a company? • How will the public perception of a company change when it prioritizes the privacy of consumer data? • How can a company turn effective consumer privacy management into a competitive advantage? • Empowerment may involve a consumer experience that is less seamless, because consumers are more engaged in the process. What trade-offs does consumer empowerment require of a firm? |
Mind perception/ Anthropomorphism in AI | • Users perceive a mind in their AIAs, and this constitutes both a benefit and a cost. • The perception of a mind in the AIA constitutes a psychological cost for the user, manifested as a threat to human identity. | • Marketers should use anthropomorphism in technology prudently and consider potential, still unexplored consequences. | • How do different consumer segments respond differently to anthropomorphism in AIAs? • Are there individual characteristics that condition the costs and benefits of anthropomorphism in AI? • Do consumers perceive a mind in other AI-powered devices and form relationships with them? • What kind of implications does perceiving a rivaling mind in AI carry for the future? • How might the context of use influence the costs and benefits of anthropomorphism in AI? (e.g., use of AI in education vs. service contexts) • Which particular anthropomorphic features could lead to greater mind perceptions? (e.g., interactivity, learning, personality, embodiment, speech). |