Published in: Ethics and Information Technology 1/2023

01.03.2023 | Original Paper

Autonomous weapon systems and responsibility gaps: a taxonomy

Author: Nathan Gabriel Wood


Abstract

A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally present it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon systems”, and so the objection is too broad. In this article I take a taxonomic approach to the objection, examining a number of systems that count as AWS under the prevalent definitions provided by the United States Department of Defense and the International Committee of the Red Cross, and I show that for virtually all such systems a clear locus of responsibility presents itself as soon as one focuses on specific systems, rather than general notions of AWS. In developing these points, I also suggest a method for dealing with near-future types of AWS which may be thought to create situations where responsibility gaps can still arise. The main purpose of the arguments is, however, not to show that responsibility gaps do not exist or can be closed where they do exist. Rather, it is to highlight that any arguments surrounding AWS must be made with reference to specific weapon platforms imbued with specific abilities, subject to specific limitations, and deployed to specific times and places for specific purposes. More succinctly, the arguments show that we cannot and should not treat AWS as if they all shared all morally relevant features, but must instead assess them on a case-by-case basis. Thus, we must contend with the realities of weapons development and deployment, tailoring our arguments and conclusions to those realities and to the facts that obtain for particular systems fulfilling particular combat roles.


Footnotes
1
Matthias (2004) and Asaro (2006) voice similar concerns, but with regard to robotic systems more generally, not specifically in the military. For discussions rooted in international law, see (Wagner, 2014; Chengeta, 2016; Crootof, 2016).
 
2
Generally, any inability to assign responsibility creates a moral and legal problem, but this is a particular problem for the LOAC, given that it requires that war crimes be traceable to individuals (Solis, 2016 p. 544; Crootof, 2016, p. 1355) and that war criminals be held accountable for their crimes (Dinstein, 2016, pp. 39–40, 300; Solis, 2016, pp. 335–339). See Ch. 7.5 of (Zając, 2022) for a succinct synthesis of many of these points and further philosophical discussion of the responsibility gap.
 
3
The response to Sparrow provided in Schulzke (2012) is general as well, but his focus on the realities of responsibility distribution in military organizations captures an idea very close to that developed below. Note also that many of the overly general statements about AWS are rooted in a lack of precision about what exactly is meant by “autonomous weapon systems” (e.g., Sparrow himself discusses a number of simpler autonomous weapons early on in his arguments, but in the core of his objection shifts the discussion to highly advanced agent-like systems). This highlights the need for clarity, a point developed in (Wood, forthcominga).
 
4
In fact, I believe such general arguments are prone to be too general in almost all instances, as the sheer variety of systems which will count as AWS means that any overarching claims are apt to be both too broad and too narrow. This is precisely why we should look to narrow and cleanly circumscribed discussions of (types of) AWS when examining fine moral and legal points. This will become evident as the arguments below are developed.
 
5
International Committee of the Red Cross (2021, p. 1). This definition also matches earlier formulations developed during expert meetings hosted by the ICRC. See, e.g., International Committee of the Red Cross (2014, p. 5). Similar views can also be found in the legal scholarship; e.g., (Crootof, 2015).
 
6
See, e.g., (Lokhorst & van den Hoven, 2012; Wood, 2020).
 
7
Note that prominent organizations advocating for a ban on AWS – such as Human Rights Watch or the Campaign to Stop Killer Robots – utilize a definition that is essentially the same as that developed by the U.S. DoD and the ICRC, and yet maintain that such systems both do not exist and would be incapable of operating under meaningful human control. See, e.g., (Human Rights Watch, 2012, 2016a, b, 2018); or the FAQ section of the Campaign to Stop Killer Robots webpage, at https://www.stopkillerrobots.org/learn/ (accessed May 31, 2021). These statements do not agree with one another and show the crucial need for clarity in and commitment to definitions and their implications, a point forcefully made in (Wood, forthcominga, b).
 
8
Which, it is worth noting, is not shared by Sparrow, who employs a much narrower agent-like view of AWS. See pp. 62–66.
 
9
Vincent Müller in fact argues that AWS can “increase the ability to hold humans accountable for war crimes” in a variety of ways. See (Müller, 2016), esp. pp. 73–78.
 
10
Many proponents of the objection are, however, unlikely to be satisfied with this weaker version, as their efforts aim to underpin a complete ban on AWS, something this weaker formulation will not and cannot sustain. However, the stronger version can only be sound if the definition of AWS is much narrower, and on such narrower definitions many currently existing and near-future systems will not count as AWS. Thus, the ban-advocate seems forced to either allow for permissible AWS (by virtue of utilizing the weaker objection) or argue against all AWS but admit that very few current or near-future systems count as AWS (by virtue of utilizing the stronger objection and modifying the definition of AWS to fit it).
 
11
Relatedly, there is value in distinguishing between responsibility as liability (backward-looking responsibility) versus responsibility as the capacity, motivation, and expectation of fulfilling some obligation (forward-looking). Both formulations capture essential aspects of our commonsense understanding of “responsibility”, but the backward-looking conception is central for legal questions, and can arguably subsume certain (though not all) elements of the forward-looking one as well. For the sake of brevity, we will move forward with an understanding of responsibility as liability, but there are connections between both views, and moral intricacies bound up in the forward-looking view which are well worth exploring more fully. See, e.g., (van de Poel, 2011).
 
12
Sharkey (2010, p. 378), referencing the arguments of Sharkey (2008).
 
13
See, respectively, Articles 48, 51.2, and 51.4 of Geneva Protocol I Additional to the Geneva Conventions (hereafter AP I).
 
14
In particular, AP I, Art. 51.4.b-c.
 
15
Mainly because such overarching conclusions are likely to always be too general. See note 4 above.
 
16
See (Müller, 2016), esp. pp. 74–75, for similar points.
 
17
Note that the nature of modern active protection systems like the Arena or Trophy means that civilians or friendly combatants at a certain distance from the protected vehicle are apt to be put at risk. This, however, simply shows that combatants and especially commanders must understand and respond to these risks and their operational environments when deciding whether or not to rely on such systems.
 
18
According to DoDAAM, the Super aEgis II requires a human to actively unlock the turret’s firing capability, but the system was initially fully autonomous, and only gained a human-in-the-loop at the request of the company’s customers. See (Parkin, 2015).
 
19
Related to point-defense systems, area-defense or area-denial systems function in roughly the same fashion and with roughly the same targeting parameters and limitations as their fixed-location counterparts, except that area-defense/denial systems are tied to a larger geographic area. This larger area of operations makes them more likely to encounter situations where they may be incapable of making the moral and legal judgments necessary for permissibly using lethal force, but this is again a factor which operators are required to take into account when choosing whether or not to deploy such systems. Operators may, however, still be able to deploy these while discharging their obligations of due care (perhaps by ensuring that the area is sufficiently inaccessible to and far enough from civilian areas that the risks of civilians entering it are low enough to be proportionate to the military advantage of deploying that system).
 
20
In particular, Art. 4.2 of Protocol II (or Art. 5.2 of Amended Protocol II) sets clear guidelines for the deployment of minefields. It is worth noting that the 1997 Ottawa Convention expressly forbids the use, production, or transfer of all anti-personnel mines, and to date 164 states are party to the treaty. However, the 32 non-signatory states include three of the five permanent Security Council members (China, Russia, and the United States) and over half of the world’s population, making the Ottawa Convention far from customary law, and therefore binding only on its signatories.
 
21
Importantly, AWS used in this fashion could also not be placed in such a way as to interdict civilians’ route to some basic necessities to which they would require access, as this would force them to cross the dangerous area. Thus, the standards of due care would dictate not just how weapons are deployed, but also where. Thanks to Maciej Zając for raising this point.
 
22
For example, currently existing AWS are fully capable of recognizing hands held high as an indication of surrender, and there are other ways that we could provide simple parameters for minimizing or eliminating errors. See, e.g., some solutions presented in Zając (2022, esp. Chs. 6–7); and Wood (2022).
 
23
See Müller (2016), esp. pp. 75–76, for similar arguments, and (Kastan, 2013) for exploration of evolving standards of due care. This clarified view also shows that humans themselves will, before even deploying an AWS, “execute the ‘critical functions’ like the decision as to who to kill and legal calculations on the lawfulness of an attack” via the setting of mission- or AWS-specific “target profiles”, thereby allowing them to retain meaningful human control and keeping AWS as “mere weapons in the hands of the warriors” (Chengeta, 2016, p. 4).
 
24
Galliott (2020, p. 163), citing Rogers et al. (1992). See also Sagan (1991, pp. 97–101) for a more detailed accounting of the Flight 655 disaster and the events leading up to it.
 
25
For arguments to the contrary (but focused on the statutes of the International Criminal Court, to which not all states are party), see (Bo, 2021). See also Bo et al. (2022) for extensive elaboration of the legal underpinnings of the responsibility gap objection.
 
26
See Thompson (1980) for the classic formulation, van de Poel et al. (2015) for extensive contemporary analysis, or van de Poel et al. (2012) and Galliott (2020) for application of the concept to, respectively, climate change and autonomous weapons.
 
27
Thompson (1980) points out limitations of the hierarchical model of responsibility, but Thompson’s arguments concern democratic political institutions.
 
28
For a variety of approaches, see, e.g., (de Greef, 2016; Santoni de Sio & van den Hoven, 2018; Mecacci & Santoni de Sio, 2019; Wyatt & Galliott, 2021; Santoni de Sio & Mecacci, 2021).
 
29
Many thanks to an anonymous reviewer for pushing me on all of these objections.
 
30
See (Nowrot, 2015; Roff & Danks, 2018; Baker, 2022) for similar use of the animal analogy.
 
31
See, however, Wood (2022) for arguments on how AWS can be utilized for deescalation.
 
32
Many thanks to the two anonymous reviewers for pressing me on this point.
 
33
Nowrot (2015, pp. 130–131, and the citations therein).
 
34
We can also imagine other animal combatants, but for ease of presentation, I will in what follows limit the discussion to dogs. However, as with the varieties of AWS, the differences in animal capabilities are apt to affect the fit of the analogy for each particular species as well.
 
35
Roff & Danks (2018) explore a similar idea of “AWS liaisons”. See esp. pp. 12–13.
 
References
Asaro, P. M. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6, 9–16.
Baker, D. (2022). Should we ban killer robots? Political Theory Today. Polity.
Bo, M. (2021). Autonomous weapons and the responsibility gap in light of the mens rea of the war crime of attacking civilians in the ICC statute. Journal of International Criminal Justice, 19(2), 275–299.
Bo, M., Bruun, L., & Boulanin, V. (2022). Retaining human responsibility in the development and use of autonomous weapon systems: On accountability for violations of international humanitarian law involving AWS. Technical report, Stockholm International Peace Research Institute.
Chengeta, T. (2016). Accountability gap: Autonomous weapon systems and modes of responsibility in international law. Denver Journal of International Law & Policy, 45, 1–50.
Crootof, R. (2015). The killer robots are here: Legal and policy implications. Cardozo Law Review, 36, 1837–1916.
Crootof, R. (2016). War torts: Accountability for autonomous weapons. University of Pennsylvania Law Review, 164, 1347–1402.
Crootof, R. (2018). Autonomous weapon systems and the limits of analogy. Harvard National Security Journal, 9, 51–83.
de Greef, T. (2016). Delegation and responsibility: A human-machine perspective. In E. Di Nucci & F. S. de Sio (Eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons (pp. 134–147). Routledge.
Dinstein, Y. (2016). The conduct of hostilities under the law of international armed conflict (3rd ed.). Cambridge University Press.
Galliott, J. (2020). No hands or many hands? Deproblematizing the case for lethal autonomous weapons systems. In S. C. Roach & A. E. Eckert (Eds.), Moral responsibility in twenty-first-century warfare: Just war theory and the ethical challenges of autonomous weapons systems (pp. 155–180). State University of New York Press.
Human Rights Watch. (2012). Losing humanity: The case against killer robots. Technical report, Human Rights Watch.
Human Rights Watch. (2016a). Killer robots and the concept of meaningful human control. Technical report, Human Rights Watch.
Human Rights Watch. (2016b). Making the case: The dangers of killer robots and the need for a preemptive ban. Technical report, Human Rights Watch.
Human Rights Watch. (2018). Heed the call: A moral and legal imperative to ban killer robots. Technical report, Human Rights Watch.
International Committee of the Red Cross. (2014). Autonomous weapons systems: Technical, military, legal and humanitarian aspects. Technical report, International Committee of the Red Cross.
International Committee of the Red Cross. (2021). ICRC position on autonomous weapons systems. Technical report, International Committee of the Red Cross.
Kastan, B. (2013). Autonomous weapons systems: A coming legal “singularity”? University of Illinois Journal of Law, Technology, & Policy, 45–82.
Leveringhaus, A. (2016). What’s so bad about killer robots? Journal of Applied Philosophy, 35(2), 341–358.
Lokhorst, G.-J., & van den Hoven, J. (2012). Responsibility for military robots. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethics and social implications of robotics (pp. 145–156). MIT Press.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
Mecacci, G., & Santoni de Sio, F. (2019). Meaningful human control as reason-responsiveness: The case of dual-mode vehicles. Ethics and Information Technology, 22(2), 103–115.
Müller, V. C. (2016). Autonomous killer robots are probably good news. In E. Di Nucci & F. S. de Sio (Eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons (pp. 67–81). Routledge.
Nowrot, K. (2015). Animals at war: The status of “animal soldiers” under international humanitarian law. Historical Social Research, 40, 128–150.
Pagallo, U. (2011). Robots of just war: A legal perspective. Philosophy & Technology, 24(3), 307–323.
Robillard, M. (2018). No such thing as killer robots. Journal of Applied Philosophy, 35(4), 705–717.
Roff, H. M., & Danks, D. (2018). “Trust but verify”: The difficulty of trusting autonomous weapons systems. Journal of Military Ethics, 17(1), 2–20.
Rogers, W. C., Rogers, S. L., & Gregston, G. (1992). Storm center: The USS Vincennes and Iran Air Flight 655: A personal account of tragedy and terrorism. Naval Institute Press.
Sagan, S. D. (1991). Rules of engagement. Security Studies, 1(1), 78–108.
Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057–1084.
Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 1–14.
Schulzke, M. (2012). Autonomous weapons and distributed responsibility. Philosophy & Technology, 26(2), 203–219.
Sharkey, N. (2007). Automated killers and the computing profession. Computer, 40(11), 124–123.
Sharkey, N. (2008). Grounds for discrimination: Autonomous robot weapons. RUSI Defence Systems, 11(2), 86–89.
Sharkey, N. (2010). Saying “no!” to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369–383.
Simpson, T. W., & Müller, V. C. (2015). Just war and robots’ killings. The Philosophical Quarterly, 66(263), 302–322.
Solis, G. D. (2016). The law of armed conflict: International humanitarian law in war (2nd ed.). Cambridge University Press.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Thompson, D. F. (1980). Moral responsibility of public officials: The problem of many hands. American Political Science Review, 74(4), 905–916.
United Nations. (2021). Final report of the panel of experts on Libya. Technical report, UN Security Council.
US Department of Defense. (2017). DoD Directive 3000.09. Technical report, United States Department of Defense.
van de Poel, I. (2011). The relation between forward-looking and backward-looking responsibility. In N. A. Vincent, I. van de Poel, & J. van den Hoven (Eds.), Moral responsibility: Beyond free will and determinism (pp. 37–52). Springer.
van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.
van de Poel, I., Royakkers, L., & Zwart, S. D. (2015). Moral responsibility and the problem of many hands. Routledge.
Wagner, M. (2014). The dehumanization of international humanitarian law: Legal, ethical, and political implications of autonomous weapon systems. Vanderbilt Journal of Transnational Law, 47, 1371.
Williams, A. P. (2015). Defining autonomy in systems: Challenges and solutions. In A. P. Williams & P. D. Scharre (Eds.), Autonomous systems: Issues for defense policymakers (pp. 27–62). NATO Communications and Information Agency.
Wood, N. G. (2020). The problem with killer robots. Journal of Military Ethics, 19(3), 220–240.
Wood, N. G. (2022). Autonomous weapons systems and force short of war. Journal of Ethics and Emerging Technology, 32(2), 1–16.
Wood, N. G. (forthcominga). Autonomous weapon systems: A clarification. Journal of Military Ethics, 22.
Wood, N. G. (2022). To ban or regulate: The need for quality in critiques of autonomous weapon systems. In H. Pechlaner, M. de Rachewiltz, M. Walder, & E. Innerhofer (Eds.), Shaping the future: Sustainability and technology at the crossroads of arts and science.
Wyatt, A., & Galliott, J. (2021). An empirical examination of the impact of cross-cultural perspectives on value sensitive design for autonomous systems. Information, 12(12), 527.
Zając, M. (2022). Autonomous weapon systems from a just war theory perspective. PhD thesis, University of Warsaw.
Metadata
Title
Autonomous weapon systems and responsibility gaps: a taxonomy
Author
Nathan Gabriel Wood
Publication date
01.03.2023
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 1/2023
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-023-09690-1
