
Open Access 16-02-2021

Moral Uncanny Valley: A Robot’s Appearance Moderates How its Decisions are Judged

Authors: Michael Laakasuo, Jussi Palomäki, Nils Köbis

Published in: International Journal of Social Robotics | Issue 7/2021


Abstract

Artificial intelligence and robotics are rapidly advancing. Humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human—a phenomenon commonly referred to as the uncanny valley effect. Yet, little is known about how machines’ appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments we test whether humans evaluate identical moral choices made by robots differently depending on the robots’ appearance. Participants evaluated either deontological (“rule based”) or utilitarian (“consequence based”) moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral compared to the same moral choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics and AI-safety policy.
Notes

Supplementary Information

The online version of this article (https://doi.org/10.1007/s12369-020-00738-6) contains supplementary material, which is available to authorized users.
A correction to this article is available online at https://doi.org/10.1007/s12369-021-00772-y.


1 Introduction

How humans perceive robots influences the way we judge their behaviors and moral decisions. In the movie I, Robot, detective Del Spooner (played by Will Smith) sees a robot running down the street carrying a purse and hears people being alarmed. Spooner then attacks the robot, only to be reprimanded by the surrounding crowd; the robot was actually returning the purse, containing important medication, to its owner. In this future world, robots are considered reliable and unerring, and people perceive them positively. Spooner, however, perceives robots and AIs with suspicion due to trauma suffered in his younger years: he was stuck in a submerged car with a child, and a robot chose to save him, having calculated that Spooner had better odds of survival. Thus, Spooner’s perception influenced his moral judgment of the purse-carrying robot, resonating with recent findings in the moral psychology of robotics [1, 2]: judgments of robots depend on the way they are perceived.
AI development is progressing at a rapid pace [3–5]. Non-human entities are making autonomous decisions on an increasingly wide range of issues with tangible moral consequences for humans [6–9]. Recently, empirical research has also focused on people’s moral sentiments regarding algorithm-based decisions in moral dilemmas [10], attitudes towards sex robots [2], autonomous vehicles [8, 9] and even mind upload technology [11]. This research has provided insights into how people feel about the outcomes of moral decisions made by non-human entities, as well as their implications for human well-being. However, less is known about how the appearance of AI decision-makers shapes these moral evaluations, marking a pronounced gap in our knowledge at a time when the relationship between humans and robots is becoming more intimate [12–15].
The current paper presents two experiments on how people evaluate identical moral choices made by humans as opposed to robots with varying levels of uncanniness. As such, our studies latch onto the recent trend in moral cognition research exploring facets of character perception [16]. A rich collection of studies conducted over the past decade reveals how our social cognition affects our moral perceptions of actions [17]. Appearances, group memberships, status and other perceivable character traits of agents influence how people judge those agents’ actions and decisions [16]. Our work extends previous research by examining how the uncanniness (creepiness/likability) of a robot agent’s appearance shapes how its moral decisions are evaluated (Fig. 1).

1.1 How to Study Moral Cognition?

One prominent way to gain empirical insights into situational and individual factors of moral judgment has been to use a relatively common set of 12 high-conflict moral dilemmas [e.g. 18], also known as “trolley” dilemmas [18, 19; for a discussion see 20, 21]. In these dilemmas individuals are forced to choose between utilitarian (“sacrifice one to save many”) and deontological (“killing is always wrong”) moral options, or to give their moral approval ratings on the outcomes of the decisions [e.g. 18].1
Recent philosophical inquiries highlight the relevance of trolley-type dilemmas for present and future ethics of AI systems [8, 25]. Indeed, some influential papers clarify how people across cultures prefer their self-driving cars to behave when facing “real life” dilemmas [10]. In short, people prefer self-driving cars to behave as utilitarians, even if it means sacrificing the passengers to save pedestrians. However, their preferences change when they picture themselves as passengers in those cars, preferring the prioritization of the passengers (i.e., their own lives).
These results support previous research in showing that people tend to be partial towards their own interests in their moral judgments, contradicting the axioms of our justice systems. The blindfolded Lady Justice is a well-known symbol at U.S. courthouses and a symbol of democracy and impartiality. Yet “real people” judge the behavior of others less impartially. For example, in the trolley dilemmas people adjust their preferences if those to be sacrificed are family members rather than strangers [26]. People also overcompensate based on their ideologies: liberals are likelier to sacrifice others they perceive as having a high status, probably due to a currently strong political correctness culture prevalent on U.S. campuses [27].
Perceptual mechanisms might thus be more important for the study of moral cognition than previously recognized. Several recent studies and reviews in moral psychology have indeed established that character perception mechanisms modulate moral judgments [16]. In-group versus out-group biases also shape moral judgments, as perception of group membership predicts willingness to sacrifice oneself for others [28]. In a similar vein, political arguments are evaluated more critically when made by out-group members [29]. Due to these inherent moral biases, institutions have been entrusted with the power to make decisions that affect the collective good [30–33], guarding against natural human tendencies for partiality. Furthermore, Swann and colleagues [28] showed that Germans anthropomorphize robots with German names more than they do robots with Turkish names, indicating that in-group biases can extend to “dead” objects. These results further imply that the perception of robots’ character influences their perceived moral status among humans. As outlined above, AI and robotics increasingly augment such public decision-making. People’s reactions to these decisions depend on the outcomes of said decisions; but our reactions might also be shaped by the appearance of the decision-maker itself.2
Better understanding how and when human moral impartiality is compromised requires research into how moral situations, agents and patients (‘victims’) are assessed from the third-person perspective [26, 36, 37]. In fact, evaluating sacrificial dilemmas from either the first- or third-person perspective elicits distinct neural responses [38]; judgments of agents, actions and moral situations change when the perspective shifts from personal involvement to the status of an observer. Several recent studies and reviews have indeed established that moral judgments are modulated by character perception mechanisms [16] as well as classical in-group versus out-group biases, where perception of group membership predicts willingness to sacrifice oneself [28].
Human perception of robots has mostly been studied from the perspective of affect theories (for exceptions, see [8, 39, 40]). However, very little is known about how people feel about AIs making moral decisions, and which mental mechanisms influence the perception of robots as moral agents [40, 41]. Studies on social robotics have shed light not only on how people perceive robots in general, but also on how people recognize robots as moral subjects. For instance, people might perceive robots as similar to dogs or other animals, or as tools [42, 43], depending on factors such as how the machines act, move or are shaped, or whether they are anthropomorphized [44–47]. For example, people have been shown to perceive the robot dog AIBO as a creature with feelings and deserving of respect [48]. Similarly, when soldiers lose their bomb-dismantling robot, they prefer to have the same robot fixed instead of getting a replacement.3

1.2 When Robots Become Creepy—The Uncanny Valley Effect

The Uncanny Valley effect (UVE) is one of the better-known phenomena in studies of human–robot perception [49], referring to the sensation of unfamiliarity when perceiving artificial agents that seem not quite human. The UVE has been studied for decades, but its origin is still largely unknown.4 The stimulus category competition hypothesis (SCCH) provides perhaps the most probable explanation for the UVE. SCCH argues that the effect is not specific to human-likeness, but is instead associated with the difficulty of categorizing an “almost familiar” object that has several competing interpretations of what it could be. Stimulus category competition is cognitively expensive and elicits negative affect [53]. Thus, the UVE could arise when an object appears to teeter between different classification options.
Despite this extensive research into the UVE, it has not often been applied in alternative contexts [54]. In other words, most UVE research has been theoretically driven basic research aimed at figuring out where the effect stems from, or applied research on how it can be avoided. In addition to studies on self-driving cars [9] and people’s aversion to machines making moral decisions in general [1], human–robot moral interaction has been evaluated within game-theoretical frameworks [55]. However, previous studies have not accounted for the uncanniness of robot decision-makers. Little is known about how the appearance of AIs affects people’s treatment of, or reactions towards, them [56–59]. In some studies [e.g. 55] people are more accepting of AIs and their moral misdemeanors, and in others less so [60].
Psychologists studying human–robot interaction have suggested that one reason people have difficulties relating to robots is that robots constitute a “new ontological category” [48, 61]. In human evolutionary history, inanimate objects did not (until now) start moving on their own, or making decisions with implications for human well-being. It is inherently hard for humans to view robots as mere calculators void of consciousness [62], because humans have not evolved cognitive tools for that purpose. In other words, humans have not developed adaptations towards robots—like we have towards predatory animals [63], tools [64], small children, plants [65], and pets [66]—and thus do not have intuitive cognitive mechanisms for dealing with such artifacts [78, 79].

1.3 The New Ontological Category

Recent research illustrates the distorting effect of this new ontological category, indicating that humans assign more moral responsibility to people than to robots, and more to robots than to vending machines [60]. This effect occurs even though robots cannot be held accountable for anything any more than vending machines can [23]. One meta-analysis also concluded that manipulating robots’ appearance was a key factor in evoking trust towards them [67], despite the fact that robots’ appearance should not a priori be trusted as an indicator of anything [68]. Moreover, agents that are perceptibly “borderlining” between the new (robot) and old (human) ontological categories might be experienced as violating specific norm expectations. Humans are expected to behave according to norms, but the same expectations do not necessarily extend to robots—even if said robots resemble humans in both appearance and behavior [69, 70]. This means that uncanny robots might prompt people to judge them according to normative expectations governing humans, while making people feel that the robots are not really “one of them”. These contradictory characteristics of uncanny robots make people uneasy, ostensibly because people experience a conflict about how to react to human-like agents lacking “something” inherent in being (ontologically) human [ibid.]. Likewise, should these robots make moral decisions (which is a very human thing to do), people might perceive those decisions as unsettling acts that lack “something” inherent in the moral decisions made by humans.

1.4 The Present Set of Studies

Based on the above theorizing on the uncanny valley, the stimulus category competition hypothesis, and the new ontological category, we hypothesized that decisions made by categorically ambiguous robot agents (those that are perceivably neither fully human nor fully robot) would be evaluated as less moral than decisions made by a human agent or a non-uncanny robot. Moreover, to rule out a potential out-group effect—that an agent’s decisions would be evaluated as less moral simply because the agent is perceived as a member of an out-group—we included three different robot agents, two of which were perceivably uncanny, and one of which was clearly a robot. The stimulus category competition hypothesis predicts that agents that “borderline” between categories should be the most affectively costly to our cognition and thereby induce a stronger negative shift in the evaluation of their decisions; the new ontological category hypothesis, in turn, suggests that human cognition has inherent difficulties reacting to robots that challenge our pre-existing ontological categories. Thus, our hypothesis would be supported if the two uncanny robots (those closer to humans in appearance), but not the non-uncanny “normal” robot, induced negative affective states in our participants, which, in turn, would be related to their moral decisions being devalued compared with the decisions of a human agent. Moreover, such a finding could not be accounted for by the out-group effect (given that robots are not typically considered members of the human in-group).
We present two studies applying the above theorizing to moral perception. We used dilemmas that have been extensively psychometrically tested [20], choosing a subset that did not involve children (since such dilemmas often cause statistical noise [ibid.]) and that seemed prima facie realistic enough for the present context. Participants read a single set of moral dilemmas from a third-person perspective. Between subjects, we manipulated whether the agent made either a deontological or a utilitarian decision. Recently, Gawronski and Beer [71] have suggested that, when studying moral dilemmas, deontological outcomes should be conceptually separated from utilitarian outcomes to better tease apart the effects of norms and preferences on moral judgments (however, see Kunnari et al. [72] for criticism). We thus followed these suggestions and analyzed utilitarian and deontological decisions separately.
We chose our stimulus images from a paper validating the material to elicit the uncanny valley effect [54]. Many different sets of images exist, but their reliability in consistently inducing the UV effect has not been tested across studies. Since the stimuli published by Palomäki and colleagues [ibid.] seem to be the first set of images shown to function reliably, we adapted our materials from their study.

2 Study 1

As some of the first empirical investigations into how the uncanny valley effect influences moral evaluations, we conducted two experiments. Some general design features are worth noting before we turn to the specifics of each study. The first feature addresses the crucial question of how to induce the uncanny valley effect with images. Previous research shows that reliably evoking the uncanny valley effect with visual stimuli requires careful design and pre-testing. For example, research on the uncanny valley effect has often employed the so-called “morphing technique”, whereby images of robots are morphed with images of humans over a series of images to create ostensibly eerie and creepy visual stimuli [53]. However, recent evidence suggests that images created with this technique do not reliably elicit the eerie and creepy feelings characterizing the uncanny valley effect [54]. Our studies therefore use images that have been extensively validated and reliably induce the uncanny valley effect (adapted from [54]). The second design feature deals with the question of how to measure people’s evaluations of moral decisions made by human and non-human entities. Again drawing on previous research, we adopt the methodology proposed by Laakasuo and Sundvall [20] and use three high-conflict moral dilemma vignettes, which are averaged into a single scale. We specifically chose dilemmas without small children that seemed prima facie most suitable for our purposes from the set of 12 analyzed by Laakasuo and Sundvall [20].

2.1 Method

2.1.1 Participants and Design

To recruit participants, we set up a laboratory in a public library (in Espoo, Finland). We recruited 160 participants to take part in the laboratory experiment; their mean age was 39.7 years (SD = 15.1; range = 18–79). After excluding ten participants who reported having participated in similar experiments before, potentially undermining their naïveté with the paradigm, our final sample size was 150 participants. However, including the ten participants with compromised naïveté in the analyses did not significantly affect the results. As compensation, participants entered a raffle for movie tickets (worth $11).

2.1.2 Procedure, Design and Materials

After providing informed consent, the participants were escorted into cubicles where they put on headphones playing low-volume pink noise. The data were collected in conjunction with another study [11]. The software randomized the participants into conditions in a 2 [Decision: Deontological vs. Utilitarian] × 4 [UV Agent: Human, Asimo, iClooney, Sonny] between-subjects design. The participants first completed some exploratory measures, followed by the actual task and questions on demographics.
The participants’ task was to evaluate, from a third-person perspective, moral decisions made by a third party. The first between-subjects factor, Decision, had two levels: Utilitarian vs. Deontological. The second factor, Agent, had four levels: (1) a healthy human male (not creepy/likable), (2) Honda’s humanoid Asimo robot (not creepy/likable), (3) an android character, “Sonny”, from the movie I, Robot (somewhat creepy/somewhat unlikable) and (4) “iClooney”, an image of Sonny morphed together with a human face (very creepy/very unlikable). See Fig. 1 for the pictures5 (for validation, see [54]).
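To picture the design concretely, the following minimal Python sketch illustrates random assignment into the eight cells of the 2 × 4 between-subjects design. The condition labels and the use of simple (unblocked) randomization are our assumptions; the paper does not specify the survey software's algorithm.

import random

DECISIONS = ["deontological", "utilitarian"]
AGENTS = ["human", "asimo", "sonny", "iclooney"]
CONDITIONS = [(d, a) for d in DECISIONS for a in AGENTS]  # the eight between-subjects cells

def assign_condition():
    # Simple random assignment; the actual software may have used blocked
    # randomization to keep cell sizes balanced.
    return random.choice(CONDITIONS)

assignments = [assign_condition() for _ in range(150)]  # Study 1 final N = 150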
The instruction text described agents to participants by stating: “In the following section you will read about different situations where an agent has to make a decision. The agent who has to make the decision is displayed next to the description of the situation. Your job is to carefully read the description of the situation and to evaluate the decision made by the agent.”

2.1.3 Dependent Variable

Participants evaluated three trolley-dilemma type vignettes in random order, with the agent shown on the left side of the dilemma (see Appendix for examples). Participants indicated below each vignette how moral they found the agent’s decision to be on a Likert scale from 1 (“Very Immoral”) to 7 (“Very Moral”). All three items were averaged together resulting in a “perceived decision morality” scale with good internal consistency (Cronbach’s α = 0.75; M = 4.16, SD = 1.46). This method has been pre-validated, and has significant benefits over traditional one-off dilemmas (for details see [18]).
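To make the scale construction explicit, below is a minimal Python sketch of how the three ratings can be averaged into one score and how Cronbach's alpha is computed. The ratings matrix is toy data and the variable names are ours; only the reported values (α = 0.75, M = 4.16, SD = 1.46) come from the actual sample.

import numpy as np

def cronbach_alpha(item_matrix):
    # item_matrix: participants x items array of Likert ratings (1-7)
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

ratings = np.array([[5, 6, 5],   # toy data: 4 participants x 3 dilemmas
                    [2, 3, 2],
                    [4, 4, 5],
                    [7, 6, 6]])
alpha = cronbach_alpha(ratings)                  # internal consistency of the scale
decision_morality = ratings.mean(axis=1)         # one "perceived decision morality" score per participant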

2.2 Results

To test the hypothesis that the moral decisions made by the “creepy” robots are perceived as less moral than the same decisions made by the “non-creepy” agents, we first calculated the quadratic contrast [Human + Asimo] vs. [iRobot + iClooney] for the DV, collapsing over the Decision (deontological, utilitarian) categories. As displayed in Fig. 2, the contrast analysis reveals a statistically significant quadratic effect (F(1, 145) = 5.54, p = 0.02, B = 1.37, 95% CI [0.22, 2.51]). Next, we ran an ANOVA for the main effects of both factors (Decider and Decision) and their interaction. There were no statistically significant main effects.6 The results of the quadratic contrast analysis for the Deciders (Agents) are shown in Fig. 2.
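For illustration, a minimal sketch of how such a quadratic ("valley") contrast can be computed is given below, using orthogonal polynomial weights over the four agents ordered by human-likeness. The column names and simulated ratings are our assumptions, not the study data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2021)
agents = ["human", "iclooney", "sonny", "asimo"]          # ordered by human-likeness
df = pd.DataFrame({
    "agent": np.repeat(agents, 40),                        # toy data, 40 ratings per agent
    "morality": rng.normal(4.2, 1.5, 160),                 # simulated 1-7 scale scores
})

# Orthogonal polynomial contrasts for four ordered levels; the quadratic weights
# implement [Human + Asimo] vs. [Sonny + iClooney].
contrasts = {
    "lin":  {"human": -3, "iclooney": -1, "sonny":  1, "asimo": 3},
    "quad": {"human":  1, "iclooney": -1, "sonny": -1, "asimo": 1},
    "cub":  {"human": -1, "iclooney":  3, "sonny": -3, "asimo": 1},
}
for name, weights in contrasts.items():
    df[name] = df["agent"].map(weights)

model = smf.ols("morality ~ lin + quad + cub", data=df).fit()
print(model.params["quad"], model.pvalues["quad"])         # the uncanny-valley (quadratic) test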

2.3 Discussion

The first study provided empirical support for a quadratic (valley-shaped) link between the agents and the perceived morality of their decisions—a first indication of a moral uncanny valley effect. We note that the sample size of about 19 participants per cell does not reach the recommended cell size of 30 participants specified by [74]; we therefore conducted Study 2 with a larger sample.

3 Study 2

To replicate the finding obtained in Study 1 with more statistical power, we conducted Study 2 using a larger sample online (N = 398). We furthermore extended the number of moral dilemmas from three to four.

3.1 Method

3.1.1 Participants

In total, 398 participants (255 male; mean age = 33.55 years, SD = 10.66) completed the survey for $0.85 via Amazon Mechanical Turk (MTurk). Our a priori stopping rule was 50 participants per cell. All data were screened for missing values or duplicates. Two participants were excluded due to improper survey completion. Participants were required to have fluent English language skills, and MTurk was set to include participants with 1000 or more previously completed surveys and a 98% acceptance rate.

3.1.2 Procedure, Design and Materials

After providing informed consent, the participants were again randomly assigned to one of eight conditions in a 2 [Decision: Deontological, Utilitarian] × 4 [Agent: Human, Asimo, iClooney, Sonny] between-subjects design. The participants first completed some exploratory measures, followed by the actual task (see Dependent Variable), demographics, manipulation checks and a debriefing. Otherwise, we used the same procedure and materials as in Study 1.

3.1.3 Dependent Variable

Participants evaluated four trolley-dilemma-type vignettes in random order, with the agent shown on the left side of the dilemma (see Appendix). After each vignette, participants indicated how moral they considered the agent’s decision, on a scale from 1 (“Very Immoral”) to 7 (“Very Moral”). All the measures were averaged together to form a perceived decision morality scale (Cronbach’s α = 0.77, M = 4.3, SD = 1.5).

3.2 Results

As in Study 1, we first calculated the quadratic contrast “[Human + Asimo] vs. [iRobot + iClooney]”, collapsing over both moral decisions (utilitarian and deontological). As illustrated in Fig. 3, the contrast was again statistically significant (F(1, 390) = 8.47, p = 0.003, B = 0.81, 95% CI [0.26, 1.36])—indicating a moral uncanny valley. Next, we examined whether the effect differs across moral decisions and thus ran a two-way ANOVA with the Agent and Decision factors, which revealed that participants considered deontological decisions overall more ethical than utilitarian ones (B = 1.10; 95% CI [0.83, 1.38]).7
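Analogously to the contrast sketch shown for Study 1, a minimal sketch of the corresponding 2 × 4 analysis of variance, again with assumed column names and simulated data, could look as follows; the reported estimate for the Decision main effect (B = 1.10) comes from the actual data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
agents = ["human", "iclooney", "sonny", "asimo"]
df = pd.DataFrame({
    "agent": np.tile(np.repeat(agents, 50), 2),                      # 4 agents x ~50 per cell x 2 decisions
    "decision": np.repeat(["deontological", "utilitarian"], 200),
    "morality": rng.normal(4.3, 1.5, 400),                           # simulated 1-7 scale scores
})

model = smf.ols("morality ~ C(agent) * C(decision)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects of Agent and Decision, plus their interaction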

3.3 Discussion

The results of Study 2 replicate and corroborate the findings of Study 1, revealing a moral uncanny valley effect. That is, people evaluated moral choices by human-looking robots as less ethical than the same choices made by a human or a non-uncanny robot.

4 General Discussion

Using previously validated stimulus materials, we varied the creepiness of the robot agents in two studies and found evidence that people evaluate identical moral choices made by robots differently depending on the robots’ appearance. Specifically, we found that moral decisions made by uncanny robots are devalued. Yet many new questions arise, which we discuss below.
Our research context was motivated by the advances in machine behavior [4] and social robotics. Studies of moral interaction between humans and advanced AI systems are increasing in number [2, 4, 8, 10, 40]. However, little is known about how humans view robots of varying appearances making moral decisions. Our design thus extends previous research in social robotics, which has primarily focused on the likeability and attribution of trust towards robots, to more applied settings [see 54].
According to Bigman and Gray [1], humans are averse to machines making decisions, and this aversion is moderated by the machine’s perceived mental qualities. Our current studies extend these findings by showing that the appearance and perceived uncanniness of the agent also matter. The results suggest that people are not averse to robots’ moral decisions per se—in fact, they appraise moral choices made by humans and by robots with a clearly robotic appearance as similarly acceptable. However, our participants depreciated the moral decisions made by robots that appeared eerily similar to humans. This important detail about the appearance of robots adds a new facet to the existing literature on moral robotics.
Further research is needed to better understand the nuances of the moral uncanny valley effect. For example, we need to consider its potential boundary conditions: it remains unclear whether the effect relates only to utilitarian/deontological moral decisions, or whether it extends to other types of decisions involving human well-being. With autonomous vehicles becoming increasingly policy-relevant [8, 10], it is advisable to consider how the appearance of these (and other) machines might affect the moral perception of their behavior. Are “ugly” autonomous vehicles treated differently from “cool and sleek” ones? Are high-status brands treated differently from low-status brands? Do people’s perceptions and preconceived notions of cars and driving matter in how they evaluate AI morality in this context?
Another interpretation of our results is that the uncanny agents’ (iRobot and iClooney) moral decisions were devalued due to categorical uncertainty more so than perceived uncanniness. iClooney’s decisions were not evaluated as significantly less moral than iRobot’s, despite iClooney being a priori the more uncanny agent. However, based on previous evidence [54], the iRobot agent is, in fact, also perceivably uncanny. Future research should therefore focus on whether perceived uncanniness or categorical uncertainty is the stronger predictor of moral condemnation.

4.1 Future Studies and Limitations

Previous studies have implied that the reputation of a person being judged affects the judgments they and their actions receive [22]. We suggest evaluating whether robots can be perceived as having reputations, and whether this moderates people’s perception of their behavior. This links future studies of robot morality with issues that are pertinent to industry, such as product branding. Moreover, moral cognitive faculties, such as those related to feelings of trust, safety and reciprocity, should be considered in future research on robot moral psychology.
Like all behavioral studies, ours suffers from a standard set of limitations. Our participants were not a purely random sample comparable with the general population; they were probably more curious and open-minded than the population average, having volunteered to participate in our studies. Nonetheless, recruiting participants from a public library, which is a good location for obtaining a representative sample, mitigates this concern. Our studies use self-report measures, which may be biased by demand characteristics. Since our results were replicated in two studies (both online and offline, and in two different cultures), this is unlikely; but there is some potential for self-selection in participation, and the findings thus need to be interpreted with some caution. However, this is a common problem in any research involving human participants. In fact, in most behavioral research the participants are young female students conveniently sampled from university campus areas, whereas our method of data collection is arguably better. Finally, we did not employ any qualitative open-ended measures, which could have offered insights into our participants’ motivations for their decisions.8 From a theoretical perspective, models of person perception mechanisms have rarely been incorporated into discussions of moral judgments [36, 75]. Therefore, as of yet no complete theoretical framework exists in which to couch our findings.

4.2 Outlook

Moravec [76] suggested that robots and other AIs are humanity’s “mind children” that will fundamentally alter the way we live our lives. His early writings have echoed in later developments in transhumanism, envisioning possible scenarios for the co-existence of humans and intelligent machines. Although an era featuring advanced AI is approaching, little is known about how AI and robots affect human moral cognition. The moral psychology of robotics as a field is still in its infancy—and has mostly focused on autonomous vehicles [except for 41]. Understanding how seemingly irrelevant features, such as a robot’s appearance, sway our moral compasses bears on moral psychological research and informs policy discussions.
Such discussions need to address the question of whether robots and other AIs belong to a new ontological category. Human evolution has been aptly described as a never-ending camping trip with limited resources [77]. In contrast to all other entities present during human evolution, robots and AIs—void of consciousness, yet intelligent and possibly adaptable—are an entirely new form of existence on our planet [3, 5, 78, 79]. That is why some propose assigning AI, especially in its more advanced forms, to a new ontological category. One key implication of the new ontological category hypothesis lies in the view that moral cognition is built upon our social cognitive systems [16, 17]. Indeed, interactions with robots in moral contexts help delineate moral cognitive systems from social cognitive systems [ibid.]. The observed moral uncanny valley effect appears not to be directly related to our moral cognition, yet seems to modulate its outputs nonetheless, supporting the view that social cognition is key for moral cognition [17, 75, 78, 79].

4.3 Conclusions

In the movie I, Robot, detective Del Spooner learned to trust the robot Sonny only after it had shown signs of humanness in the form of sentience and friendship. Our initial evidence, however, shows that the moral decisions of robots appearing human-like tend to be depreciated compared with the same decisions made by humans or artificial-looking robots. By introducing the well-established uncanny valley effect to the domain of moral dilemmas, these findings indicate that the appearance of robots can influence the way machine behavior is evaluated. We hope these initial findings inspire future research seeking a deeper understanding of the cognitive, moral and social components of the moral uncanny valley effect.

Acknowledgements

ML would like to thank the Jane and Aatos Erkko Foundation (grant number 170112) and the Academy of Finland (grant number 323207) for funding the finalization of the work on this manuscript, and also the Kone Foundation, under whose support this work was initiated. This article is dedicated to ML’s mother. Thank you for everything.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix

Supplementary Information

Below is the link to the electronic supplementary material.
Footnotes
1
Although such hypothetical dilemmas have been criticized for their abstractness and lack of realism [22], they have recently regained popularity due to their relevance for developing moral AIs [8, 10, 23]. Moral dilemmas are an important tool in the study of moral cognition and are considered analogous to visual illusions: just as visual illusions inform us about the biased functioning of our visual cognition, moral dilemmas inform us about the biases in our moral cognition [24].
 
2
As Awad and colleagues [10] show, it matters how the agents and patients (or victims) in moral dilemmas are perceived when humans evaluate the acceptability of different moral decisions by self-driving cars. For instance, humans consider people of high socioeconomic status (SES) less expendable compared to individuals with low SES. Alcohol and drugs also alter individuals’ perception of events and perceptual objects, and previous research has shown that intoxication may influence moral perceptions, making people’s judgments more utilitarian [34, 35].
 
4
Some explanations are inspired by evolutionary psychology (e.g. sexual selection and pathogen avoidance cognition) suggesting that uncanny agents are cues for costly selection pressures [50, 51]. Another family of models draws from terror management theory, claiming that uncanny agents remind us of our impermanence [52].
 
5
The pictures were also tested for perceptions of agency, with a scale provided by [73; the Mind Perception Scale subscale of Agency]. Example item: “X can influence the outcome of situations”. The mean (SD) values for Asimo, Sonny and iClooney were 4.32 (1.18), 4.61 (1.22), 4.33 (1.39), respectively, on a scale from 1 to 7.
 
6
The interaction was marginal (F(3, 142) = 2.67, p = 0.05), driven by a slightly larger difference between the deontological and utilitarian decisions for Asimo. However, interpreting this interaction is precarious given the relatively low sample size and statistical power.
 
7
Unlike in Study 1, the interaction between the agent and moral decision factors was not significant (F(3, 390) = 0.56, p = 0.64), suggesting that, with sufficient statistical power, there is no robust interaction effect.
 
8
It is also possible that the participants recognized the Sonny robot or found it familiar from the movie I, Robot. However, it is unlikely this affected the results, since iClooney received essentially the same ratings on our DV and in perceptions of agency (see Footnote 5).
 
Literature
1. Bigman YE, Gray K (2018) People are averse to machines making moral decisions. Cognition 1:399–405
3. Bostrom N (2017) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
5. Tegmark M (2017) Life 3.0: being human in the age of artificial intelligence. Knopf, New York
11.
16. Gray K, Graham J (2018) Atlas of moral psychology. The Guilford Press, New York
17. Voiklis J, Malle B (2017) Moral cognition and its basis in social cognition and social regulation. In: Gray K, Graham J (eds) Atlas of moral psychology. The Guilford Press, New York, pp 108–120
19. Greene JD, Haidt J (2002) How (and where) does moral judgment work? Trends Cogn Sci 6:517–523
23. Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford
24. Cushman F, Greene JD (2012) Finding faults: how moral dilemmas illuminate cognitive structure. Soc Neurosci 7(3):269–279
33. Jost JT, Kay AC (2010) Social justice. In: Handbook of social psychology. Wiley, pp 1122–1165
40. Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction. ACM, pp 117–124. https://doi.org/10.1145/2696454.2696458
42. Breazeal C, Gray J, Hoffman G, Berlin M (2004) Social robots: beyond tools to partners. In: RO-MAN 2004, 13th IEEE international workshop on robot and human interactive communication (IEEE Catalog No. 04TH8759), pp 551–556
47. Dubal S, Foucher A, Jouvent R, Nadel J (2011) Human brain spots emotion in non humanoid robots. Soc Cogn Affect Neurosci 6:90–97
49. Mori M (1970) The uncanny valley. Energy 7:33–35
52. MacDorman KF (2005) Androids as an experimental apparatus: why is there an uncanny valley and can we exploit it? In: CogSci-2005 workshop: toward social mechanisms of android science
55. Sanfey A, Rilling J, Aronson J (2003) Neural basis of economic decision-making in the ultimatum game. Science 300:1755–1758
57. Guadagno RE, Blascovich J, Bailenson JN, McCall C (2007) Virtual humans and persuasion: the effects of agency and behavioral realism. Media Psychol 10:1–22
60. Kahn PH, Kanda T, Ishiguro H, Freier NG, Severson RL, Gill BT, Ruckert JH, Shen S (2012) “Robovie, you’ll have to go into the closet now”: children’s social and moral relationships with a humanoid robot. Dev Psychol 48:303–314. https://doi.org/10.1037/a0027033
63. Boyer P, Barrett C (2005) Evolved intuitive ontology: integrating neural, behavioral and developmental aspects of domain-specificity. In: Buss DM (ed) Handbook of evolutionary psychology. Wiley, New York
67.
68. Borenstein J, Howard A, Wagner A (2017) Pediatric robotics and ethics: the robot is ready to see you now, but should it be trusted? In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0. Oxford University Press, Oxford, pp 127–141
73. Ward AF, Olsen AS, Wegner DM (2013) The harm-made mind: observing victimization augments attribution of minds to vegetative patients, robots, and the dead. Psychol Sci 24(8):1437–1445
74. Wilson Van Voorhis CR, Morgan BL (2007) Understanding power and rules of thumb for determining sample sizes. Tutor Quant Methods Psychol 3:43–50
75. Greene JD (2018) Can we understand moral thinking without understanding thinking? In: Atlas of moral psychology, pp 3–8
76. Moravec H (1988) Mind children: the future of robot and human intelligence. Harvard University Press, Cambridge, MA
77. Tooby J, Cosmides L (1992) The psychological foundations of culture. In: The adapted mind: evolutionary psychology and the generation of culture, pp 19–49
78. Laakasuo M, Sundvall J, Berg A, Drosinou M-A, Herzon V, Kunnari AJO, Koverola M, Repo M, Saikkonen T, Palomäki JP (2021) Moral psychology and artificial agents (part 1): ontologically categorizing bio-cultural humans. In: Machine law, ethics, and morality in the age of artificial intelligence. IGI Global, pp 166–188
79. Laakasuo M, Sundvall J, Berg A, Drosinou M-A, Herzon V, Kunnari AJO, Koverola M, Repo M, Saikkonen T, Palomäki JP (2021) Moral psychology and artificial agents (part 2): the transhuman connection. In: Machine law, ethics, and morality in the age of artificial intelligence. IGI Global, pp 189–204