Published in: Ethics and Information Technology 3/2020

23.02.2020 | Original Paper

Autonomous weapons systems and the moral equality of combatants

By: Michael Skerker, Duncan Purves, Ryan Jenkins


Abstract

To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.


Footnotes
1
Docherty (2012). Docherty raises a number of additional objections to the deployment of AWS, including some of those on this list. See also Kahn (2017).
 
2
Burri (2017).
 
3
Krishnan (2009), Guarini and Bello (2012), and Sharkey (2007, p. 122).
 
4
Sparrow (2007) and Roff (2013).
 
5
Sparrow (2016a).
 
6
Sparrow (2016a); cf. Robillard (2017) and Burri (2017).
 
7
Jenkins and Purves (2016, pp. 1–10).
 
8
We do not mean to suggest that a theory or objection must posit the existence of constraints to count as deontological. See Scheffler (1984) for one exception. Generally speaking, however, deontological moral theories posit constraints.
 
9
McNaughton and Rawling (1998).
 
10
This is admittedly an idiosyncratic way of characterizing the distinction between consequentialism and deontology and their relationship to Just War Theory, since some have argued that the principles of discrimination and proportionality are grounded in non-consequentialist moral constraints (e.g., Nagel 1972). Still, for the purposes of our discussion, it is helpful to characterize discrimination and proportionality as valuable goals to be achieved in wartime rather than as constraints on the achievement of our goals.
 
11
Guarini and Bello (2012) and Schmitt and Thurnher (2012). See also Asaro (2012).
 
12
For an excellent extended discussion of these challenges, see Sparrow (2015, 2016a).
 
13
See Schmitt (2012), Guarini and Bello (2012) and Sparrow (2016a).
 
14
See Bostrom (2014) for a description of some of the paths to general AI that is far superior to human general intelligence.
 
15
It is crucial to his argument, however, that robot agency falls short of full-blooded moral agency. The origins of this ‘responsibility gap’ objection to AWS can be found in Matthias (2004). See also Roff (2013) for an extended discussion of responsibility and liability for AWS.
 
16
This summarized version of Sparrow’s argument is taken from Purves et al. (2015, pp. 853–854).
 
17
Purves et al. (2015, p. 854).
 
18
Sparrow (2016a, p. 106).
 
19
Sparrow (2016b, p. 402).
 
20
Sparrow (2016a, p. 106).
 
21
Ibid., p. 107.
 
22
Jenkins and Purves (2016, pp. 391–400).
 
23
Sparrow derives support for his view from survey data about negative public feeling toward AWS (2016b, p. 402). While we do not wish to dismiss such data as irrelevant to the morality of deploying AWS, we believe it must be critically assessed before it is judged decisive evidence for the view that AWS are mala in se.
 
24
See, for example, Augustine’s letter 189 to Boniface, §6. See also Purves et al. (2015, p. 864): “Augustine believes there are moral requirements for soldiers themselves to act for the right reasons. Aquinas, for his part, quotes Augustine approvingly in his Summa Theologica (1920). According to Reichberg, Aquinas is principally concerned with ‘the inner dispositions that should guide our conduct in war’ (2010, p. 264).”
 
25
Purves et al. (2015) develop this objection. They cite Davidson (1964, 1978) as a proponent of the desire-belief model, and Darwall (1983), Gibbard (1990), Quinn (1993), and Korsgaard (1996), among others, as proponents of the ‘taking as a reason’ model.
 
26
See Bostrom (2014) and Purves et al. (2015).
 
27
See Skerker (2016).
 
28
MEC also covers privileged irregular combatants like insurgents who wear uniforms, carry their arms in the open, and obey the norms and laws of war.
 
29
One of the authors advances that full-blown defense and critique in Skerker (2016).
 
30
One of the authors has spent a career working with the US military. In his experience, service personnel overwhelmingly accept MEC for conventional adversaries.
 
31
Seumas Miller (2010, pp. 57, 68, 77, 80).
 
32
Miller (2010, Chap. 2), Rawls (1999, pp. 351–354), and Waldron (1993, pp. 3–30, 26, 28) derive the duty differently (cf. Nozick 1974, pp. 102, 110). Support ought to be withdrawn if the institutions become significantly corrupt and fail, over a long period, to meet the goals for which they were established. Yet the anarchic risks of non-compliance with institutional rules are significant, so support for institutions generally oriented to morally valuable goods is warranted even when the institution pursues a particular unjust project. For example, taxpayers should not stop paying taxes to support public schools because of a dubious curriculum implemented one year, and citizens should not overthrow their governments because of a bad foreign policy decision.
 
33
Waldron (1993, pp. 8–10).
 
34
Basically just states largely respect the basic rights of their inhabitants and equitably enforce the law; they need not be democratic.
 
35
Cf. Hurka (2007, p. 210).
 
36
To violate a right is to wrongfully infringe that right.
 
37
As robots do not experience impulses of the relevant sort, we will ignore condition (6) for the rest of this essay. We also cannot answer whether an AWS can choose to omit the pursuit of its own rights in order to defer to others’ interests without answering the key question of this essay. That question is whether AWS have liberty-rights to kill enemy service members. We will therefore put element (10) aside as well.
 
38
In order to raise questions about AWS, Sparrow invokes Thomas Nagel's insistence that the moral basis of military violence has to be an interpersonal relationship between subjects (Sparrow 2016a). Since an AWS is incapable of an interpersonal relationship, it cannot engage in permissible killing. Sparrow may be missing the force of Nagel's argument, which is focused on the recipient of military violence rather than the agent (1972). The recipient of violence is treated with respect when he is targeted for something he chose to do, like becoming a combatant, as opposed to something that has nothing to do with his subjectivity, like his ethnic affiliation or his presence in a certain area. The use of indiscriminate weapons is disrespectful because such weapons do not distinguish between people based on their status or activities. Writing in 1972, Nagel surely assumed that a human agent would be engaged in discriminate targeting, but it seems that a sophisticated AWS could draw that kind of distinction, targeting only armed personnel or military materiel rather than all people in an area.
 
39
One can also enjoy rights by virtue of natural properties, not strictly through having rights ceded to one. So babies might have rights despite not being eligible to be moral contract partners.
 
40
Full disclosure: one author of this paper does not share this asymmetrical aversion to death by robot compared with death by terrorist.
 
41
Some utilitarians would argue that agents have duties despite the non-existence of rights. Since most contemporary just war theorists operate in a deontological idiom, we will confine our discussion to that broad moral framework.
 
42
Psychopaths, famously, have trouble either distinguishing moral rules from non-moral ones (Borg and Sinnott-Armstrong 2013) or feeling the force of the moral rules they do recognize (Cima et al. 2010). See these works, among other entries, in the debate over the moral psychology of psychopathy.
 
43
Discussions of the ethics of AWS sometimes invoke vexed terms like “the value of human life,” “moral weight,” “moral gravity,” and so on. We want to be careful about the use of such evocative but imprecise terms lest arguments reduce to “I know it when I see it”-style appeals that beg questions against proponents of AWS.
 
44
Nozick (1974, p. 33).
 
45
Peter Asaro argues that the application of morally rich laws cannot be automated because such laws are designed to be interpreted by people. For example, the right to due process is essentially a right to "question the rules and the appropriateness of their application in a given circumstance, and to make an appeal to informed human rationality and understanding" (2012, p. 700).
 
46
This argument also addresses Michael Robillard’s argument that the deontological concerns about AWS are misplaced because an AWS does not make genuine decisions of its own but merely acts on conditional orders programmed into it ahead of time by human beings. On Robillard’s view, the designers of the AWS are therefore responsible for the deaths directly incurred by the AWS (2017, pp. 6, 7). A complicated technical and philosophical discussion would be required to address whether a given level of complex machine learning could result in an entity able to make genuinely autonomous decisions distinct from one of the conditional prompts programmed by the AWS engineers. More to our point, though, combatants do not enter into reciprocal moral relationships with weapon designers but rather with the agents who decide to use those weapon systems. If weapon designers did cede their claim-rights against being targeted by the combatants their weapon systems threaten, then, counter-intuitively, an elderly, retired engineer could be permissibly targeted in wartime by combatants threatened by aircraft whose design the engineer contributed to 30 years prior.
 
47
This argument is consistent with Nagel (1974, p. 136).
 
48
Granted, an experienced service member may be able to kill later in his or her career without much emotion.
 
49
“Agent regret” is a term introduced by Bernard Williams (1993), referring to the emotion one feels following one’s non-culpable causation of harm.
 
50
We thank an anonymous referee for the Journal of Applied Philosophy for pressing us on this point.
 
References
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709.
Borg, J. S., & Sinnott-Armstrong, W. (2013). Do psychopaths make moral judgments? In Handbook on psychopathy and law (pp. 107–128). Oxford: Oxford University Press.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Burri, S. (2017). What is the moral problem with killer robots? In B. J. Strawser, R. Jenkins, & M. Robillard (Eds.), Who should die? (pp. 163–185). Oxford: Oxford University Press.
Cima, M., Tonnaer, F., & Hauser, M. D. (2010). Psychopaths know right from wrong but don’t care. Social Cognitive and Affective Neuroscience, 5(1), 59–67.
Darwall, S. (1983). Impartial reason. Ithaca: Cornell University Press.
Davidson, D. (1964). Actions, reasons, and causes. Journal of Philosophy, 60(23), 685–700.
Davidson, D. (1978). Intending. In Y. Yirmiahu (Ed.), Philosophy and history of action (pp. 41–60). Berlin: Springer.
Docherty, B. (2012). Losing humanity: The case against killer robots. Report for Human Rights Watch (pp. 39–41). New York: Human Rights Watch.
Gibbard, A. (1990). Wise choices, apt feelings. Oxford: Clarendon.
Guarini, M., & Bello, P. (2012). Robotic warfare: Some challenges in moving from noncivilian to civilian theaters. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 129–144). Cambridge: MIT Press.
Hurka, T. (2007). Liability and just cause. Ethics & International Affairs, 21(2), 199–218.
Jenkins, R., & Purves, D. (2016). Robots and respect: A response to Robert Sparrow. Ethics & International Affairs, 30(3), 391–400.
Kahn, L. (2017). Military robots and the likelihood of armed conflict. In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0. Oxford: Oxford University Press.
Korsgaard, C. (1996). The sources of normativity. Cambridge: Cambridge University Press.
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Surrey: Ashgate.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
McNaughton, D., & Rawling, P. (1998). On defending deontology. Ratio, 11(1), 37–54.
Miller, S. (2010). The moral foundations of social institutions. Cambridge: Cambridge University Press.
Nagel, T. (1972). War and massacre. Philosophy & Public Affairs, 1, 123–144.
Nozick, R. (1974). Anarchy, state, and utopia. New York: Basic Books.
Purves, D., Jenkins, R., & Strawser, B. J. (2015). Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory and Moral Practice, 18(4), 851–872.
Quinn, W. (1993). Morality and action. Cambridge: Cambridge University Press.
Rawls, J. (1999). A theory of justice (2nd ed.). Cambridge, MA: Belknap Press.
Roff, H. M. (2013). Killing in war: Responsibility, liability, and lethal autonomous robots. In F. Allhoff, N. G. Evans, & A. Henschke (Eds.), Routledge handbook of ethics and war: Just war theory in the twenty-first century. Milton Park: Routledge.
Scheffler, S. (1984). The rejection of consequentialism: A philosophical investigation of the considerations underlying rival moral conceptions. Oxford: Oxford University Press.
Schmitt, M. N. (2012). Autonomous weapon systems and international humanitarian law: A reply to the critics. Harvard National Security Journal, 531, 256.
Schmitt, M. N., & Thurnher, J. S. (2012). Out of the loop: Autonomous weapon systems and the law of armed conflict. Harvard National Security Journal, 4, 231.
Sharkey, N. (2007). Automated killers and the computing profession. Computer, 40(11), 122.
Skerker, M. (2016). An empirical defense of combatant moral equality. In When soldiers say no (pp. 77–87). Routledge.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Sparrow, R. (2015). Twenty seconds to comply: Autonomous weapon systems and the recognition of surrender. International Law Studies, 91, 699–728.
Sparrow, R. (2016a). Robots and RESPECT: Assessing the case against autonomous weapons systems. Ethics and International Affairs, 30(1), 93–116.
Sparrow, R. (2016b). Robots as “evil means”? A rejoinder to Jenkins and Purves. Ethics and International Affairs, 30(3), 401–403.
Waldron, J. (1993). Special ties and natural duties. Philosophy and Public Affairs, 22(1), 3–30.
Williams, B. (1993). Moral luck. In D. Statman (Ed.), Moral luck (pp. 35–55). Albany: State University of New York Press.
Metadata
Title
Autonomous weapons systems and the moral equality of combatants
Authors
Michael Skerker
Duncan Purves
Ryan Jenkins
Publication date
23.02.2020
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 3/2020
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-020-09528-0
