
04-02-2022 | Original Paper

Moral Judgments in the Age of Artificial Intelligence

Authors: Yulia W. Sullivan, Samuel Fosso Wamba

Published in: Journal of Business Ethics | Issue 4/2022


Abstract

The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence (AI) system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in the AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the theory of mind perception, we hypothesized that two dimensions of mind mediate the relationship between perceived intentional harm and blame judgments toward AI: perceived agency, the attribution of intention, reasoning, goal pursuit, and communication to AI; and perceived experience, the attribution of emotional states (such as the capacity to feel pain and pleasure), personality, and consciousness to AI. We also predicted that people are more likely to attribute mind characteristics to AI when harm is perceived to be directed at humans than when it is perceived to be directed at non-humans. We tested our research model in three experiments. In all experiments, we found that perceived intentional harm led to blame judgments toward AI. In two experiments, we found that perceived experience, not agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles these findings by showing that perceived intentional harm directed at a non-human entity did not lead to increased attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use.
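For readers unfamiliar with how such a mediation hypothesis is typically tested, the sketch below bootstraps the indirect (a*b) effect of a simple mediation model on simulated data. The variable names, effect sizes, and data are illustrative assumptions only; they do not reproduce the authors' materials or results.

```python
# Hedged sketch: a percentile-bootstrap test of an indirect effect of the form
# perceived intentional harm -> perceived experience -> blame toward AI.
# All data below are simulated; variable names are hypothetical stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

intentional_harm = rng.integers(0, 2, n).astype(float)          # 0 = unintentional, 1 = intentional condition
perceived_experience = 0.5 * intentional_harm + rng.normal(0, 1, n)
blame_ai = 0.6 * perceived_experience + 0.2 * intentional_harm + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    """a*b estimate: x -> m (path a), then m -> y controlling for x (path b)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(intentional_harm[idx],
                                perceived_experience[idx],
                                blame_ai[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(intentional_harm, perceived_experience, blame_ai)
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# A confidence interval that excludes zero is the usual evidence for mediation.
```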


Metadata
Title: Moral Judgments in the Age of Artificial Intelligence
Authors: Yulia W. Sullivan, Samuel Fosso Wamba
Publication date: 04-02-2022
Publisher: Springer Netherlands
Published in: Journal of Business Ethics, Issue 4/2022
Print ISSN: 0167-4544
Electronic ISSN: 1573-0697
DOI: https://doi.org/10.1007/s10551-022-05053-w
