Abstract
This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction, one that does justice both to the robustness of the social responses robots solicit in humans and to the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and their unsophisticated social capabilities preclude any attribution of rights to robots, which are devoid of intrinsic moral dignity and personal status. On the other hand, we argue that another form of moral consideration, one not based on rights attribution, can and must be granted to robots. The reason is that relationships with robots, like social interaction with other human beings, offer human agents important opportunities to cultivate both vices and virtues. Our argument appeals to social recognition to explain why social robots, unlike other technological artifacts, can establish quasi-social relationships with their human users as pseudo-persons. This recognition dynamic justifies regarding robots as worthy of moral consideration from a virtue-ethical standpoint, as it predicts the pre-reflective formation of persistent affective dispositions and behavioral habits capable of corrupting the human user's character. We conclude by drawing attention to a potential paradox raised by our analysis and by examining the main conceptual conundrums our approach has to face.
Cappuccio, M.L., Peeters, A. & McDonald, W. Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Philos. Technol. 33, 9–31 (2020). https://doi.org/10.1007/s13347-019-0341-y