Published in: Ethics and Information Technology 2/2020

4 January 2020 | Original Paper

Robot Betrayal: a guide to the ethics of robotic deception

By: John Danaher



Abstract

If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception—superficial state deception—is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
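
To fix the terminology before the argument begins, the abstract's three-way taxonomy can be rendered as a small data structure. The sketch below is illustrative only: the category names follow the abstract, but the example signals, comments, and helper names are assumptions, not drawn from the paper.

```python
# Illustrative sketch of the paper's three-way taxonomy of robotic
# deception. Category names follow the abstract; the example signals
# are hypothetical and not taken from the paper itself.
from enum import Enum, auto


class DeceptionType(Enum):
    EXTERNAL_STATE = auto()     # misrepresents the world outside the robot
    SUPERFICIAL_STATE = auto()  # signals an inner state (e.g. affection) the robot lacks
    HIDDEN_STATE = auto()       # conceals a state or capacity the robot actually has


# Hypothetical examples, one per category.
EXAMPLES = {
    "guide robot reports a corridor as clear when it is blocked": DeceptionType.EXTERNAL_STATE,
    "care robot smiles as though it feels empathy for the patient": DeceptionType.SUPERFICIAL_STATE,
    "home robot conceals that it is recording audio": DeceptionType.HIDDEN_STATE,
}

for signal, kind in EXAMPLES.items():
    print(f"{kind.name:>17}: {signal}")
```

On the paper's own analysis, only the first and third rows would count as deception proper: the second is better read as a superficial signal that need not deceive, and the third as the candidate for betrayal.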


Footnotes
1
The philosophical obsession with consciously intended deception has been criticized by others. Robert Trivers, in his natural history of deception, points out that “If by deception we mean only consciously propagated deception—outright lies—then we miss the much larger category of unconscious deception, including active self-deception” (Trivers 2011, p. 3). This seems right and is the more appropriate approach to take when looking at robotic deception.
 
2
This paper is not the first to defend this idea, though the theory has not always been explicitly named as it is in the main text. For similar arguments, see Neely (2014), Schwitzgebel and Garza (2015), and Danaher (2019a, b). Each of these authors suggests, either directly or indirectly, that the superficial signals of a robot should be taken seriously from an ethical perspective, at least under certain conditions.
 
3
Ethical behaviourism is also consistent with, but distinct from, the science of machine behaviour that Rahwan et al. (2019) advocate. Rahwan et al.'s article makes a plea for scientists to study how machines behave and interact with human beings using the tools of behavioural science. In making this plea, they highlight the current literature's tendency to focus too much on engineering details (how the robot/AI was designed and programmed). As a result, an important aspect of how machines work, namely the behavioural patterns they exhibit, is overlooked. I fully support their programme, and the stance advocated in this article agrees with them insofar as (a) the behavioural perspective on robots does seem to be overlooked or downplayed in the current debate and (b) there is a significance to machine behaviour that is independent of the mechanical and computational details of their operation. Nevertheless, despite my sympathy for their programme, I would emphasize that ethical behaviourism is not intended to be part of a science of machine behaviour. It is a claim about the kinds of evidence we can use to warrant our ethical attitudes toward machines.
 
4
Easily overridden by other moral considerations, that is. They might still legally amount to a breach of contract.
 
5
He also might not. He doesn’t discuss the issue at all.
 
References
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human-robot co-evolution. Frontiers in Psychology, 9, 468.
Danaher, J. (2019a). The philosophical case for robot friendship. The Journal of Posthuman Studies, 3(1), 5–24.
Elder, A. (2015). False friends and false coinage: A tool for navigating the ethics of sociable robots. SIGCAS Computers and Society, 45(3), 248–254.
Elder, A. (2017). Robot friends for autistic children: Monopoly money or counterfeit currency? In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford: Oxford University Press.
Grice, P. H. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Speech acts (pp. 41–58). New York: Academic Press.
Häggström, O. (2019). Challenges to the Omohundro-Bostrom framework for AI motivations. Foresight, 21(1), 153–166.
Isaac, A. M. C., & Bridewell, W. (2017). White lies and silver tongues: Why robots need to deceive (and how). In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford: Oxford University Press.
Kaminsky, M., Ruben, M., Smart, W., & Grimm, C. (2017). Averting robot eyes. Maryland Law Review, 76, 983.
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 117–124).
Margalit, A. (2017). On betrayal. Cambridge: Harvard University Press.
Omohundro, S. (2008). The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the First AGI Conference, Artificial General Intelligence 2008 (pp. 483–492). Amsterdam: IOS.
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., et al. (2019). Machine behaviour. Nature, 568, 477–486.
Sharkey, A., & Sharkey, N. (2010). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.
Simler, K., & Hanson, R. (2018). The elephant in the brain. Oxford: Oxford University Press.
Trivers, R. (2011). The folly of fools. New York: Basic Books.
Turkle, S. (2007). Authenticity in the age of digital companions. Interaction Studies, 8, 501–507.
Turkle, S. (2010). In good company. In Y. Wilks (Ed.), Close engagements with artificial companions. Amsterdam: John Benjamins Publishing.
Voiklis, J., Kim, B., Cusimano, C., & Malle, B. F. (2016). Moral judgments of human vs. robot agents. In 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 775–780). IEEE.
Wagner, A., & Arkin, R. (2011). Acting deceptively: Providing robots with the capacity for deception. International Journal of Social Robotics, 3(1), 5–26.
Metadata
Title
Robot Betrayal: a guide to the ethics of robotic deception
Author
John Danaher
Publication date
4 January 2020
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 2/2020
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-019-09520-3
