Published in: Ethics and Information Technology 1/2023

Open Access 01-03-2023 | Original Paper

Autonomous Military Systems: collective responsibility and distributed burdens

Author: Niël Henk Conradie


Abstract

The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most likely be deployed. This will allow us to move beyond the solely individualist understandings of responsibility at work in most treatments of these cases, toward one that includes collective responsibility. This allows us to attribute collective responsibility to the collectives of which the AMS form a part, and to account for the distribution of burdens that follows from this attribution. I argue that this expansion of our responsibility practices will close at least some otherwise intractable techno-responsibility gaps.
Notes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

When considering the implications of introducing Autonomous Military Systems (AMS) into the realm of warfare, one of the immediate responses tends to be alarm at the prospect of so-called techno-responsibility gaps – cases where the attribution of moral responsibility seems to be called for, but where there appears to be no fitting bearers for such moral responsibility (Matthias, 2004; Sparrow, 2007; Himmelreich, 2019a). In this article I focus on laying some conceptual groundwork for responding to such moral responsibility gap concerns.
I have taken my focus to be what I will be referring to as Autonomous Military Systems in order to widen the scope of my considerations beyond only a discussion of so-called Lethal Autonomous Weapons Systems (LAWS – the eponymous “killer robots” central to Sparrow’s (2007) seminal paper). Though these sorts of AMS certainly deserve moral attention, the issue of responsibility gaps and my arguments here are not limited to them. I am concerned with the sorts of emerging military technologies that threaten to open responsibility gaps, and as has been persuasively argued by Van Rompaey (2019: 35), “the essential qualities of [these] emerging or current technologies are not that they are weaponized, individualized, and embodied”. They are not necessarily weaponized, nor are they necessarily lethal in a direct sense. Rather, the responsibility concerns about the introduction of AMS are taken to centre on their autonomous nature. Although how to understand autonomy has long been the subject of heated philosophical debate (Dworkin, 1988; Feinberg, 1989; Christman, 2015; Mele, 2001, among others), the “autonomous” component of AMS is usually taken to refer to the ability to decide and act without human input or direction, or what Liu (2016) calls discretionary autonomy. Thus, an autonomous system of this sort would be one that can make a discernment as to what to do, and then proceed to do so without a human being in the loop at any stage. It is precisely this discretionary aspect that is taken to open a gap in our responsibility practices, since autonomy understood in this way – as a form of individual control – is intimately related to notions of responsibility. One might say: if a weapon system were fully autonomous, then it would have to be the final bearer of responsibility for its conduct.
Thus, the argument is that because the system was in sufficiently independent control of the conduct there can be no transference or tracing of responsibility “through” the system to any (human) individual further back in the causal chain. The decisions are the system’s own, and so too, then, are their outcomes. At the same time, though, since AMS seem at face value to be unfitting bearers of responsibility as it is usually understood, we have an apparent gap.
Yet, there is reason to think that this conclusion is premature. It is built on an individualist understanding of responsibility, where the only legitimate bearers of responsibility are individuals. However, decisions and actions are rarely free of influence from others in the way that this individualist notion of responsibility requires (Levy, 2018; Niker et al., 2021): they are often the result of collective reasoning, agreement or negotiation (French 1979; List and Pettit 2011; Collins 2018). Consider the paradigmatic case of a decision reached by the executive committee of a corporation to undertake a given course of action. None of the individual members has the ability to bring about the decision alone; it is only as a collective that it can be achieved. This can be seen as threatening to our responsibility practices, as it may be the case that we often cannot identify discrete individuals who acted autonomously, in the individualist sense described above, to whom to attach either blame or praise. The fear is somewhat akin to that present in the case of AMS, in that the concern is the opening up of a responsibility gap. However, our responsibility practices have proven resilient and adaptable, and we have developed methods of legitimately attributing responsibility even in these difficult cases. For example, we employ collective responsibility, which permits us to take a collective as an agent and as a potential bearer of responsibility. This does not mean that individual responsibility is not also brought to bear on direct perpetrators, but that at times there are outcomes (or elements of outcomes) that can only properly be traced to the shared actions and decisions of a collective, and in these cases it is collective, rather than individual, responsibility that is appropriate. If we return to our committee of board members, it seems correct that responsibility for their decision rests on the committee as a collective.
My argument is that a similar adaptability in our practices can help us resolve at least some of the concerns about responsibility gaps raised by AMS. If we move beyond a limiting and individualist notion of responsibility, by taking seriously the role of collective responsibility, then we can close some otherwise intractable gaps involving AMS, most notably those cases that I will refer to as pure collective responsibility cases. The work proceeds as follows: I make some clarifications on the scope of my argument by providing a more fleshed out explanation of the sorts of technologies I am grouping under the AMS label and clarifying my focus on responsibility as accountability, rather than any of the various other guises of responsibility. I unpack the traditional approach to understanding the relationship between autonomy and responsibility in individual cases. In particular, I uncover the crucial role played by the notions of control and agency, and how this individual approach does leave open the risk of AMS creating a unique techno-responsibility gap. Moving on to my own proposed solution, I first discuss how the notions of control and agency can be extended to collectives, thus opening the possibility for collective responsibility to address at least some of our responsibility gap worries. Thereafter, I respond to some influential objections to the notion of collective responsibility. Finally, I conclude with a discussion of the different possible contexts in which an AMS could be deployed and map out how to account for the attribution of responsibility across these deployments.

A clarification of scope: AMS and responsibility as accountability

AMS come in a wide variety of forms. Some do possess the lethal weaponry characteristic of LAWS, such as the SGR-A1 sentry gun deployed by the South Korean military along the demilitarized zone (Weinberger, 2014) or the Low Cost Autonomous Attack System (LOCAAS) in use by the U.S. military (Sparrow, 2007). However, the category also includes non-lethal technologies such as the Global Hawk spy plane as well as unembodied technologies such as “the Disposition Matrix for suggesting drone strike targets, software that can suggest profiles of interest, make connections between individuals based on drone footage, and biometric data among other sources” (Van Rompaey 2017). The unifying feature of all these varied technologies is that they operate with discretionary autonomy – they can receive inputs, apply algorithmic processing to these inputs, and based on this processing can then discriminate between a number of optional outputs which are then enacted. For some this output may be to designate a target, for others to refocus their surveillance apparatus, and for yet others to fire a weapon. Though on the face of it this lattermost output can most immediately result in morally significant harms (and this no doubt explains at least some of why LAWS have received as much focus as they have in the literature), it is important to note that any of these outputs can potentially do the same. The Disposition Matrix could settle upon a civilian target, or a Global Hawk may decide to turn its surveillance systems toward one area of interest over another, and by doing so fail to provide intelligence necessary to avert a civilian massacre. It is precisely this ability to bring about what we may call discretionary harm that makes applications of AMS uniquely susceptible to techno-responsibility gaps.
Responsibility is widely recognized to have more than one “face” or what I will here call variant.1 Watson influentially differentiated between two variants of responsibility – which he calls responsibility as accountability and responsibility as attributability (2004: 266). For Watson, the primary difference between the responses legitimated by the two variants of responsibility is that only accountability seems to justify sanction or the usual Strawsonian reactive attitudes of anger, resentment, and indignation.2 The reactive attitudes are moral responses that can be directed either by the victim of wrongdoing toward the perpetrator (moral anger, resentment), by a bystander at the perpetrator (indignation), or by the perpetrator toward herself or himself (guilt) (Strawson, 1962). In contrast, Watson argues that nothing further than the attribution of moral fault is justified by attributability. In terms of the conditions necessary for each variant, he argues that attributability requires that the behaviour be reflective of the agent’s valuational system – whereas accountability (as it involves the imposition of adverse treatment on the blamed) requires that the agent meet the “control principle” or some requirement of avoidability (Watson, 2004: 274), a requirement I discuss in the next section.
Though Watson’s approach is undoubtedly foundational, my own view is closer to the more recent tripartite theory of responsibility introduced by David Shoemaker (2015: 16), according to which there are three types of responsibility, with each conditioned by a different facet of the quality of an agent’s will: attributability, which is conditioned by the agent’s quality of character, answerability, which is conditioned by the agent’s quality of judgement, and finally accountability, which is conditioned by the agent’s quality of regard. Each of these types of responsibility also legitimates a different syndrome of reactive attitudes toward the agent in question, namely: disdain/admiration, regret/pride, and anger/gratitude respectively. So, poor quality of character would make an agent an apt target for disdain, whereas a sufficiently good quality of regard would make an agent an apt target for gratitude, and so on. Importantly for my overarching argument, Shoemaker (2015a: 224–225) identifies accountability and the quality of regard as that type of responsibility and that facet of will where control plays a necessary role. It is also the case that whereas the responses legitimated by attributability are somewhat passive, those legitimated by accountability “implicate confrontation of a kind” (Shoemaker, 2015a: 87). It is characteristic of accountability, in contrast to attributability, that it is not properly expressed without targeted communication to the accountable party.
In part due to the limitations of space I will be focused on accountability and will not be considering the other variants of responsibility. However, my choice of focus on accountability is deliberate. The reason for doing so is that I wish to deal with cases that do in fact involve the imposition of adverse treatment on the blameworthy – even if this is understood as gently as Shoemaker’s notion of communicating moral anger to the target – and it is accountability alone of the three variants that is linked to such sanction and “confrontation.” It is due to this focus that I will be taking control as the relevant condition for moral responsibility in the succeeding section, as this condition is uniquely and pertinently bound to accountability. Having clarified the scope of my arguments, I now turn to a consideration of the roles of autonomy and control in individual and collective responsibility.

Individualised autonomy and individual responsibility

In the traditional picture of responsibility, one of the necessary conditions that an agent must meet in order to be responsible for some conduct, X, is that the agent must have brought about X with a certain degree of autonomy.3 That is, the agent was not physically forced to do X by some other agent, was not hypnotised or in some other way dramatically manipulated in order to bring it about. Performing X was something that the agent did themselves, free (to some meaningful extent) from the influence of external factors – most pertinently other agents. This requirement for autonomy is often cashed out in terms of a control condition on responsibility: for the agent to be responsible for X, performing X had to be under the agent’s control in some suitable way.4
Leaving aside for a moment what is meant by “in some suitable way”, autonomy, on this picture, is an individualised notion as it depends on the independence of an individual agent from external forces. The bearer of responsibility is always a discrete individual. This is not to say that a number of individuals might not all be individually responsible for their particular contributions to a given outcome. But even in such a case each individual is personally (or individually) responsible for only their contributions, and only those that fell within their autonomy, within their individual control. In the terminology I will be using in this piece, an individual that meets this requisite control condition – and so can be said to have operated with autonomous agency vis-à-vis the outcome in question – is a fitting bearer of responsibility for the outcome, and being such a fitting bearer makes one a justified target of certain responses. The justification here is pro tanto and can potentially be outweighed by considerations external to the fittingness of responsibility. Imagine a situation where you are warned that if you target your partner with moral responses the next time they lie to you, your partner will be killed. In such an, admittedly outrageous, case it would obviously not be all-things-considered justified to so target your partner. The consequence of this is that the individual is a fitting target for moral responses on the basis of her or his conduct, where such responses include Strawsonian reactive attitudes and possibly sanctions depending on the nature of the conduct (Strawson, 1962; Watson, 2004; Scanlon, 2015).
I will take appropriate control here to be guidance control.5 For an agent to have guidance control over an action, the actual mechanism resulting in the action (called the actual-sequence mechanism) must be moderately reasons-responsive.6 This kind of reasons-responsiveness requires that the mechanism must be receptive to a significant number of reasons (including moral reasons) and must be reactive enough to adjust behaviour in light of, and in accordance with, at least some of these reasons. It is important to note that the reasons under consideration here are normative reasons, and that the agent must be able to govern her conduct in reaction to these reasons qua their nature as normative reasons. This last point is well expressed by Levy (2011: 116) when he states that the agent must “properly appreciate the significance of bringing about that state of affairs, where the significance of a state of affairs consists of the features which provide reasons for bringing it about (often, but not always, moral reasons)”. This does not mean that the agent must necessarily have the belief that a given consideration in favour of acting is a normative reason to act, but that the agent recognises the “call to action”, or the normative force of the features in question.
There is also an important and plausible connection between reasons-responsive control and agency. This is stressed by Velleman (2014) when he makes the point that:
[i]f a person’s constitution includes a causal mechanism that has the function of basing his behavior on reasons, then that mechanism is, functionally speaking, the locus of his agency, and its control over his behavior amounts to his self-control, or autonomy.
If we take Velleman’s point that the mechanism of reasons-responsive control is the locus of agency seriously, then it seems that such a mechanism that is sensitive to moral reasons is plausibly the locus of moral agency.
Finally, my control condition is not only concerned with actions, but also omissions. This is what I will call oversight, the fact that an entity can justifiably be said to have control over not only a certain set of the things that they do, but also a certain class of the things that they don’t. Oversight is the kind of control that an editor has over the contents of a magazine: the editor may not in fact check every word or page of the final product, but provided that she was appropriately reasons-responsive while undertaking the task, she has oversight control over the final product – and, relevantly here, is accountable for both what she changed and what she didn’t.7
It is important to recognise that AMS do not (presently) meet this standard of control. The “autonomous” in AMS is not synonymous with the notion of autonomy typically employed in the responsibility literature. These systems are not (yet) capable of appreciating the special (in these cases moral) normative force of such reasons. The best analogy may be to an idealized psychopath, an agent who has the capacity to respond to reasons but lacks the ability to appreciate moral normativity. Such an agent could, of course, learn to theoretically identify those considerations that we take to be moral reasons, but what they lack is the ability to recognise the special normative force of such reasons (Fischer & Ravizza, 1998; Shoemaker, 2011). It is precisely because AMS do possess a “weaker” form of autonomy – the independence from external influence and ability to direct their own actions without interference – while not possessing the richer sort required for traditional responsibility, that there is a possibility of a responsibility gap.
It is worth noting here that the notion of control that I have outlined here as necessary for responsibility coincides neatly with – and expands upon – the notion of Meaningful Human Control (MHC) as it has developed in the recent literature (Santoni de Sio and van den Hoven 2018; Mecacci and Santoni de Sio 2020). In many respects my project shares the goal of closing responsibility gaps by ensuring that the correct sorts of control are present in order to facilitate appropriate responsibility tracking. One way to interpret my project here is as a defence of the possibility that collectives can instantiate this sort of control, and further that if we are to take responsibility gaps seriously, we will have to allow collectives to at times be the bearers of blame.

A role for collective responsibility

Our decisions and actions are often interrelated and dependent on the influence of others (Walter & Ross, 2014; Nagel, 2013). Far from removing autonomy, it can even be the case that this interrelatedness functions as a support for autonomy (Nagel, 2015). Furthermore, at times, collectives can give rise to decisions and actions that cannot be traced to any single individual (List & Pettit, 2011; Björnsson, 2020). This is particularly so in the case of hierarchical systems, of which the military is an almost stereotypical example (Osiel, 1998, 2011). These are complexes composed of many parts, in the military case at least sometimes including humans and AMS. At least some outcomes resulting from decisions in these complex, integrated, and collective contexts will be untraceable to any individual. I will refer to such instances as cases of pure collective agency. A paradigmatic example of such an instance can be found in Pettit’s (2001, 2007) so-called ‘discursive dilemma’. This is the situation in which a committee has to make a judgement about a set of interconnected questions. A yes–no vote is taken on each of the questions. The combination of the majority judgements can subsequently yield a decision to which the committee members are unanimously opposed, even though there is a majority in favour of each of the questions. If the final decision is made on these majorities, then we can have a situation in which, despite unanimous opposition, the final verdict will still go through. And since this opposition is unanimous, Pettit claims that none of the individuals can be held responsible.
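The structure of the discursive dilemma can be made concrete with a small sketch. The votes below are hypothetical, chosen only to exhibit the pattern: each of three premises commands a majority, so a premise-based procedure commits the collective to their conjunction, yet every member individually rejects that conjunction.

```python
# Pettit's discursive dilemma, illustrated with hypothetical votes.
# Three members vote yes/no on three premises p, q, r; the collective
# decision is the conjunction of the majority verdicts on each premise.

members = {
    "A": {"p": True,  "q": True,  "r": False},
    "B": {"p": True,  "q": False, "r": True},
    "C": {"p": False, "q": True,  "r": True},
}

def majority(premise):
    """A premise passes if more than half the members accept it."""
    yes = sum(votes[premise] for votes in members.values())
    return yes > len(members) / 2

# Premise-based procedure: the collective accepts the conclusion,
# because each premise individually commands a 2-of-3 majority...
collective_conclusion = all(majority(p) for p in ("p", "q", "r"))

# ...yet every individual member rejects the conclusion, since each
# member rejects at least one premise.
individual_conclusions = {
    name: all(votes.values()) for name, votes in members.items()
}

print(collective_conclusion)    # True
print(individual_conclusions)   # {'A': False, 'B': False, 'C': False}
```

The collective verdict thus emerges from the group-level procedure while being opposed by every member, which is precisely why no individual seems a fitting bearer of responsibility for it.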
Despite – or perhaps because of – the lack of individual culpability, in practice we often apply responsibility collectively in these cases. This is evidenced in instances such as corporate misdoing – consider the recent Dieselgate scandal in the German automotive industry (Rogers & Spector, 2017) – or in the case of international sanctions levied against States – consider here examples like Apartheid South Africa (Keller 1993). Despite ongoing philosophical wranglings as to its conceptual nature or even its existence, general human practice as well as the law have long held that responsibility, under various names, can be attributed, and sanctions levied, against collectives – i.e. that moral responsibility, as well as legal responsibility, can have a collective, corporate, systemic, or distributed face (Schockley 2007; Crawford 2007; Nollkaemper and van der Wilt 2009). When the possibility of collectives being responsible is discussed, the usual go-to examples are corporations (with nation-states usually cropping up second most frequently).8 This may well be because corporations best fit the theoretical requirements usually posited as necessary for collective responsibility – the most important of these being, first, collective agency and, beyond that, moral agency.
By the first of these is meant that certain activities can only be accurately identified as “belonging to” the collective, rather than to one or more of its members. We seem to do this often and easily enough in common parlance, such as when we say that “Apple introduced a new software system”, or that “BP launched a new oil rig”, rather than identifying these actions with individuals. There is a lively debate as to whether or not this linguistic evidence (or indeed any evidence) should lead us to think that these corporations are in fact collective agents, often taking the form of methodological objections to collective responsibility on the basis that such collective agents do not exist – usually because they eliminatively reduce to the individual agents who are their members.9 A crucial point of contention is whether or not collectives can instantiate the propositional attitudes necessary for intentionality (intentions, desires, beliefs), as this is taken as a necessary condition for agency. Intentional action is usually understood to be action under a certain sort of control that is responsive to reasons, and there is an important and plausible connection between this sort of control and agency (Anscombe, 1963; Davidson, 2001; Bratman, 1987, 2009).
As my aim in this work is not to defend the very possibility of collective responsibility – which is certainly well beyond the scope of my arguments here – but rather to see whether it can help in resolving certain sorts of techno-responsibility gaps, I will be assuming that these conditions for collective agency can be met. To this end, I am following the general strategy of defenders of collective agency by adopting a functionalist approach to propositional attitudes (and other mental states)10 for the purposes of this paper, while not ruling out that a non-functionalist approach could also be possible. According to this approach, “[a]n entity has propositional attitudes, if and only if it has states with the appropriate functional profiles.” Where “[f]unctional profiles are the full specification of the causal roles states of a system have to play to realise propositional attitudes” (Strohmaier, 2020: 1902–1903). However, collective agency is not in and of itself enough to make a collective a fitting bearer of responsibility. It must further be shown that the collective can exercise moral agency. One influential view on what is required for such agency holds that only entities that are morally reasons-responsive, that is, capable of recognizing moral reasons (qua moral reasons) and controlling their conduct in light of these reasons, can be open to moral responsibility (Fischer & Ravizza, 1998). In addition, post Strawson’s seminal work, there is the requirement that for an entity to be a fitting bearer of responsibility it must be able to experience, recognise, and respond to the reactive attitudes relevant to our responsibility practices (Strawson, 1962; Tollefsen, 2003; Shoemaker, 2015). As with collective agency, I will be assuming that it is possible for collectives to be morally reasons-responsive and to instantiate states that have functional profiles matching the Strawsonian reactive attitudes.
Individual responsibility and collective responsibility differ in terms of the responses justified. In the former case, the individual in question is the exclusive fitting target for moral responses; in the case of collective responsibility, it is the collective that is the fitting target of our moral responses, though the members of the collective can be the justified targets of these responses in their role as members of the collective. This justified distribution of the responses to the members of a group is why responsibility in the case of collectives is a distributed responsibility.11 This does not mean that individuals within the group might not warrant individual responsibility for their contributions and actions that may include more punitive responses (French 1998; Crawford 2007; List and Pettit 2011). Neither collective nor individual responsibility is taken as exclusive, but what determines a given agent’s fittingness for one or the other is different. To be a fitting bearer of collective responsibility on account of some conduct, that conduct must fall under the oversight control of a collective, and you must be an integrated part of that collective. To be a fitting bearer of individual responsibility requires that the conduct can be directly traced to you (is not emergent from a collective), and that you had sufficient individualised autonomy in performing it. There can also be cases where a member of a collective exercises their individualised autonomy in order to perform some conduct at their own discretion but under the oversight of the whole. As the collective had oversight over the conduct in question, it bears collective responsibility. On the other hand, the performing agent is open to individual responsibility, as the conduct can be traced to them and they acted at their own discretion.
Furthermore, responses in the collective case may differ from those in the individual case. The former may not bring to bear the full gamut of reactive attitudes that we do in cases of the latter. And when considering sanctions, the kinds of responses justified by collective responsibility are more concerned with the reactive adjustment of duties12 than with retributive punishment. There are two facets to this adjustment in duties: the first is backward-looking and concerns the diminishment of duties that others owe the collective. There is also a forward-looking facet: the collective acquires a duty to improve for the future, or the collective’s existing duty to improve for the future is strengthened. Thus, a collective that is morally on the hook has a duty to commit to improvement, to pursue restitution, and to recognise itself as blameworthy if able.
Where the relationship between AMS and the collectives of which they are a part is concerned, there are a few important things to note. The first of these is that for an AMS to be part of a collective it must be sufficiently integrated into the reasons-responsive mechanism of that collective. As I understand this requirement, it involves the integration of the technology into the decision-making structures of the collective. This is in line with a very similar view spelt out by Collins (2019), when she describes the constituents of a collective as united under a rationally operated group-level decision-making procedure. Björnsson (forthcoming) further unpacks this demand as follows:
Members are united under a group level-procedure if (i) they are at least tacitly presumptively committed to abide by it, (ii) the procedure is operationally distinct from the personal decision-making procedure of any one member, and (iii) the enactment of the decisions requires actions on parts of the members.
I take an AMS to be self-evidently able to meet requirements (ii) and (iii), and, provided requirement (i) is given a functionalist reading of the sort I am endorsing here, it too is achievable by an AMS. For (ii), the addition of the AMS would not alter the operational distinctiveness of the group-level procedure: reconnaissance provided by an autonomously operating Global Hawk – the content of which is determined by its discretion in where to focus its sensory capacities – can be an essential and integrated part of a collective decision-making procedure determining the movements of a platoon of soldiers, yet the contribution of the Global Hawk does not displace the group-level procedure that will in fact result in a decision. Similarly, for (iii), an AMS can serve as an enactor of a collective decision. Contra Collins, I would refine requirement (iii) to read: the enactment of the decisions requires actions on the parts of (at least some of) the members. It is not necessary that every member be an enactor in all cases. Finally, for an AMS to meet requirement (i) we will need to unpack a functionalist profile of tacit presumptive commitment, and the AMS will need to instantiate this profile. This does not seem to be an impossible task. Without pretence that this is a full functionalist accounting of the topic, it is plausible that an entity’s having a presumptive commitment requires that the entity is capable of discretion, and that we have certain expectations about its conduct: it will pursue a certain course, and a failure to do so would create a demand for explanation. For this commitment to be tacit merely means that all of this can be unspoken – there need be no particular recognition of the presence of these states at time t for the profile to be instantiated at time t. In the case in question, this “certain course” would be adherence to the decisions of the collective.
If this functional interpretation, or one similar to it, is plausible, then there does not appear to be a clear barrier here to an AMS (be it a LAWS or a military network) being unified as a member of a collective. An AMS – which by my definition in this work possesses discretionary control – that did not abide by the decisions of the military organisation within which it operates would quickly find itself removed or reprogrammed. Its adherence to the decisions of the collective is presumed, and failures to adhere would be aberrations demanding explanation.
What this does mean is that AMS that are deployed outside of these sorts of decision-making structures would not be part of a collective, and so any gaps that they might open would not be resolvable by an appeal to collective responsibility. That said, as I will discuss in the next section, this sort of isolated deployment is strikingly unlikely within a military context, and even where it does occur there are other potential ways to deal with responsibility in these cases.
The second thing to note about the relationship between AMS and collectives is that the AMS, lacking moral agency itself, can never receive individual blame and is thus only susceptible to blame qua membership in the collective. Recall that the collective fulfils its duties through the activities, usually coordinated, of its members, qua members. As such, meeting the commitment to improve or provide redress might require the activity of the AMS as much as that of the human members.13 A perhaps artificial but useful parallel might be drawn to a collective that includes among its members an idealized psychopath (as introduced in Sect. 3). Though such an individual undoubtedly possesses some degree of reasons-responsiveness – and may indeed be responsive to a wider selection of reasons in a given context than their fellows – they do not possess moral agency and would be an illegitimate bearer of moral responsibility as accountability.14 However, they may well be an integrated part of the decision-making procedure of a collective which holistically possesses moral agency, as it is holistically responsive to moral reasons as per the demands of guidance control. Indeed, a crucial feature of cases of pure collective responsibility is that in such cases the failure to respond to moral reasons cannot be felicitously addressed to any individual, and it will often be the case that several members of the collective were in no position to respond to moral reasons at all. The analogy with an AMS follows from the fact that such a system has a similar blindness to the unique normative force of moral reasons (and so cannot be responsive to moral reasons qua moral reasons) as the idealized psychopath. However, again like the idealized psychopath, the system has the ability to contribute to the decision-making procedure of the collective through the contribution of its own (limited) reasons-responsiveness.
The third and final note concerns the potential analogies and disanalogies between the collectives in which AMS are likely to be integrated and the more regular examples of collectives from the literature. At first brush it would seem that the collective relevant to an AMS would be either some subunit of a national military (perhaps even the national military itself) or the gestalt of designers, investors, purveyors, and users of the technology. While I will discuss how exactly I argue we should individuate collectives in Sect. 5, it could certainly be objected that the latter of these does not exhibit the sort of integrated decision-making structure that corporations or military organisations do. Without these features it seems implausible to attribute collective agency in such cases. On this matter I agree: collective responsibility will not carry us far here if we are trying to capture the entire diffused complex of individuals that contribute to the development and eventual deployment of an AMS. On the other hand, this process will likely involve several collectives (by Collins’ criteria) along the way. In many cases these will be corporations, or regulative committees, or similar bodies that have integrated, reasons-responsive decision-making procedures. My focus here, however, is rather on the former sort of collective relevant to AMS: the military organisation in which it is embedded. These organisations exhibit entrenched, hierarchical decision-making procedures, designed to maintain effective command and control even in the tumult of armed conflict, making them suitable candidates for collective agency and collective responsibility.

Objections to collective responsibility and responses

Despite its prevalence in practice, the notion of collective responsibility is controversial within the philosophical literature. Two important objections made against collective responsibility are: (1) that only autonomous agents – and by “autonomous” here is meant individualised autonomy, even if it is not explicitly identified as such – can be bearers of moral responsibility, and that collectives cannot be such agents (Lewis, 1948; Corlett, 2001; Narveson, 2002); and (2) that the employment of collective responsibility leads to scapegoating or other unfair attributions of responsibility (Sverdlik, 1987; Reiff, 2008). I will call this second objection the fairness objection. The first of these is a methodological concern, whereas the second is normative; I respond to both of these objections below.
The response to these worries centres on the recognition that individualised autonomy is not the only sort of autonomy that can ground responsibility. What is necessary for collective responsibility is not some individual notion of autonomy, but rather a kind of collective autonomy, conditioned on an appropriate sort of collective control (List & Pettit, 2011; Hess, 2014). This in turn leads to two subsequent concerns. The first is the methodological worry that it is unclear how to differentiate between the control exercised by the collective and that exercised by its constituent individuals; more precisely, the worry is that there is nothing more to the former than the latter, and so there is no genuine notion of collective control. The second is a facet of the fairness objection already described: that many members of a collective may have no control over a given action taken by the collective.
To the first concern, differentiating the control exercised by the collective from that of its constituent individuals, we must remind ourselves what control means in this instance. An outcome is under appropriate control when it results from an appropriately reasons-responsive mechanism. In turn, agency stands in an intimate relationship with such control, in that the locus of agency and the reasons-responsive mechanism are bound together. In light of this, I follow Levy (2018) in placing agency, rather than the agent, at the centre of our picture of attributions of agency. As Levy argues, instead of following Fischer in asking, “Is the agent appropriately responsive and reactive to reasons?”, we should instead ask, “Is appropriate responsiveness and reactiveness to reasons exemplified?” There is nothing built into the notion of guidance control that requires that the reasons-responsive mechanism be a sub-agential one belonging to an individual agent. Hess (2014: 5), whose lead I am explicitly following here, has made this point persuasively, pointing out that collective agents can possess the “sane and stable pattern” in their actual and potential responses to normative reasons required by reasons-responsiveness. What matters is whether the conduct is the result of a mechanism that is appropriately responsive and reactive to normative reasons. Where this mechanism is to be found, we will find the locus of agency, and so too the agent – individual or otherwise.15
One way to determine where this locus lies is to determine whether the reasons to which the actual-sequence mechanism resulting in some conduct was responsive were individual reasons or collective reasons. A mechanism responsive to collective reasons cannot be wholly bounded within an individual, as an individual cannot have collective reasons. This fact about individual and collective reasons follows from an application of ought-implies-can. To paraphrase an example introduced by Björnsson (2020): imagine three men standing on the shore of a body of water, with ten children drowning out at sea and a single raft sitting on the beach. Each man is only able, individually, to rescue a single child, whereas if the raft is employed all ten children can be saved. The raft is heavy enough that it would take two of the men pushing together to get it out to sea. In this case, Björnsson argues that each individual ought to rescue a child, which is what each individual can achieve. But, collectively, the men as a group ought to rescue all ten children, as this is something they can do collectively. On this point, I am in agreement with his interpretation. If the men collectively push out the raft and rescue the children, then the mechanism resides in the men as a collective, and it is this collective that can be said to have control over the outcome. Thus it is the collective that would be the fitting bearer of praise. In a more explicitly collective context, if the development (or more likely part of the development) of a morally laudable new product can only be achieved through the decision-making structures and processes of a corporation, and the combined co-operation of its members, then this development would be under collective control16 and the collective could be praiseworthy on this basis.17 Given the account of control I am working with, none of these individuals can be said to have control over the outcome, but the collective can.
More difficult to parse are those cases in the grey area where both individual and collective control are at work on the same outcome. Imagine a case where Matthew, an employee of a large firm, notices that the printer is jammed. Fortunately, his employer maintains a convenient closet full of supplies to be used for the unjamming, of which Matthew avails himself. In this case the collective clearly contributes to the unjamming, and so does Matthew – he could have seen to the unjamming without assistance, but he did not. These mixed cases can be difficult to unpack in terms of attributing responsibility, and there is no easy answer for parsing the relative contributions of individuals and the collective in such instances; however, this is not necessary for the argument here. To answer the concern surrounding separating individual from collective control, what is necessary is to show that in at least some cases the control exercised over an outcome cannot be reduced to the control of the contributing constituent members. As cases of this sort seem plausible, I now turn to the fairness objection (2), the threat of scapegoating and unfair blame.
Scapegoating is a serious concern, and it is not unique to cases of collective responsibility. As scapegoating is a misapplication of our responsibility practices, it can occur across all sorts of these practices. The prevalence of the threat means that it is important to consider whether collective responsibility is more susceptible, or uniquely susceptible, to it. The central worry about scapegoating in cases of collective responsibility is that at times when we hold collectives responsible, the sanctions we impose place disproportionate, unfair burdens on group members who did not have individual control over the outcome, and who we may therefore feel are undeserving (see Reiff, 2008, for a strong pressing of this objection). Concerns like this are often illustrated with examples such as the international sanctions applied to Iraq, where the suffering was felt not primarily by the elite groups that are such sanctions’ more fitting targets, but rather by average persons, many of whom had no oversight control whatsoever over the conduct being punished (Gordon, 2011).
I have two responses to this. Firstly, it is necessary to disentangle what it is that follows from moral responsibility. It is the central feature of responsibility that it legitimates a number of moral responses toward the blameworthy and praiseworthy. Considering only cases of negative moral valence, these moral responses can be broadly divided between those that are constitutive of moral responsibility – blame – and those that are legitimated by blameworthiness but not constitutive of moral responsibility – sanctions. But this is not all that follows from moral responsibility: these moral responses bring about consequences, both for those who are targeted by them and for others. These further consequences, when they result in disutility, I call burdens. Following from this, in the collective context, I take blame and sanctions aimed at a collective to be moral responses justified by collective responsibility, while burdens I take to be the consequences of the former pair that cause disutility. By blame here is meant the application of negative Strawsonian reactive attitudes, but also the adjustment of our relationship with the bearer of blame (Scanlon, 2015). Sanctions can come in a variety of forms, and not all cases of collective responsibility will legitimate their application. Though the presence of such responsibility may well be a necessary condition for a sanction to be legitimate, there are often further considerations that must be accounted for, other conditions to be met, before a sanction would be justified. Sanctions are penalties that require some modification in the behaviour or material state of the bearer of blame – paying a fine, paying back your friend, sleeping on the couch at your partner’s demand. Many sanctions take the form of adjustments of duties.
There are two facets to this adjustment in duties. The first is backward-looking and concerns the diminishment of duties that others owe the collective. The second is forward-looking: the collective acquires a duty to improve for the future, or its existing duty to improve is strengthened. Thus, a collective that is morally on the hook has a duty to commit to improvement, to pursue restitution, and to recognise itself as blameworthy if able. Finally, the burdens of collective responsibility are the costs that are borne by anyone as a result of the imposition of either blame or sanctions on the collective.
In a case of pure collective responsibility, blame and sanction are only legitimately attributable to the collective agent in question, not its members as individuals. To illustrate using an example case:
Protest: The Detachment*, a collective within the military of some nation-state, has made the collective decision to deploy, and has been deploying, autonomous cruise missiles to engage opposition targets. A result of this has been the deaths of several civilians, who were mistaken for opposing soldiery by the system. When this chain of events is uncovered, many members of the community gather to protest the actions of the Detachment* and to demand restitution. In part, this protest takes the form of chanted slogans, recriminations, and expletives directed at members of Detachment*, in their role as such members, when they enter and leave the base. At least some of the families of these members who witness this blaming suffer emotional and psychological harm as a result.
Is this harm to the families of members unfair? My answer is no, it is not. Importantly, the harm is not the result of moral responses directed at the families themselves; if that were the case, this would be an illegitimate application of responsibility and so wrong on its face. If the children of some of the members of Detachment* faced bullying on account of the situation, this would be illegitimate – and immoral, as the bullies have moral reason not to undertake such activity. The burdens being experienced here are the consequences of legitimate responses. And we already have a type of responsibility where the fact that faultless – even uninvolved – individuals will at times have to bear the burdens resulting from blame or sanction does not usually move us to support the elimination of these practices: individual responsibility. It is frequently the case that when we hold individuals legitimately responsible, this results in adverse impacts on others. For an extreme example, if a man is blameworthy for murder, then it is plausible that we conclude that some sanction involving separation from society is legitimate. However, if this man was the sole breadwinner for his family, then applying this sanction will result in difficulties for his family. Less severely, a spouse holding their partner responsible for infidelity might take the sanction of divorce to be legitimate, which can have adverse impacts on their children. When one begins investigating cases of individual responsibility, one quickly finds that this sort of thing is far from uncommon. For this reason, I take collective responsibility to be in good company here: if the distribution of burdens in cases like Protest is unfair, and this unfairness is sufficient to dismiss collective responsibility from our morally defensible practices, then the same must apply to individual responsibility.
As such an extreme revisionist position on individual responsibility is unlikely to be more than a fringe one, this should count as support for collective responsibility remaining within our pantheon of responsibility concepts and practices. At the least, it should shift the burden of proof onto those arguing that collective responsibility is open to unique fairness concerns.
By arguing that collective responsibility finds itself in good company I am not contending that there are no cases where collective responsibility will be misapplied, at least sometimes resulting in harm and unfairness, or where following the legitimate practice might result in outcomes with lower utility than if we did not apply the practice – or applied it differently. Rather, my argument is that since these same concerns are raised by individual responsibility practices, we should approach these worries about collective responsibility in the same way we approach worries about individual responsibility: by clarifying the correct conditions and rules that govern the practice, enforcing their correct application, and recognising what might count as justified grounds for violating these rules. Nor are we merely discussing a subject interesting in theory: we already employ collective responsibility practices all the time. Cases like Protest are regular occurrences, and are only the most visible demonstrations of collective responsibility. The best response is not to reject collective responsibility, which is already a part of our practices, warts and all. Rather, the best way to mitigate or negate the possible harms resulting from its misapplication is to take it seriously and to better emphasise the proper rules for its legitimate application.
Secondly, only those who had or contributed to oversight control are members of the relevant collective, and so only they can justifiably bear burdens as members. The consequence of this agency-first approach for the fairness concern is that, since only the set of constituent individuals who are integrated elements of the reasons-responsive mechanism counts as the collective that bears accountability, blame falls only on those who in fact exercised agency, and the fear of unfair blaming is drastically mitigated.
For example, if Google releases a product that immorally spies on users’ data and passes this data on for profit, the blameworthy collective in this case will be the set of individuals who are integrated in, and exemplify, the reasons-responsive mechanism resulting in this action. This is oversight control, and contribution here includes both actions and omissions: those who planned the release, those who performed the actions necessary for the release, and those who omitted to prevent it when they had the capacity all count as contributing, provided they were structurally integrated into the process. This parallels the argument of Heersmink (2017: 446) that “extended cognitive systems [systems consisting of individual agents and one or more artifacts] have agency when the artifact is fully transparent and densely integrated into the cognitive processes of its user, whereas distributed systems without central control lack agency.” Let us call this set of individuals Google*. I argue that it is Google* that is open to collective accountability for releasing the harmful product, in that I would be justified in adopting suitable, morally-loaded responses to Google*. These could include fining Google*, refusing to co-operate with it, demanding recompense or apology, targeting it with reactive attitudes, or disbanding the collective. This further step, of extending responsibility to Google*, is opposed by Heersmink. He argues instead that since “artifacts are not intentional agents and cannot experience the consequences of repercussions aimed towards them…[and] are furthermore unresponsive to threats of punishments”, artefact-human systems cannot bear responsibility (Heersmink, 2017: 445).
However, as I have already discussed, it is not clear that the transfer approach can be successful. Furthermore, in response to Heersmink’s concerns, it is the burdens that follow from responsibility responses aimed at the collective that can be justifiably applied to the constituent members (including artificial members) in cases of pure collective responsibility, not the responses themselves. It is irrelevant whether any given member can experience the consequences or be responsive to threats of punishment; what matters is whether the collective can do so.
Both constituent individuals and the supervening agent may exemplify the necessary reasons-responsiveness for responsibility, and though this may raise practical difficulties for determining how we should respond, there is no theoretical contradiction in both individuals and the collective bearing responsibility. Individuals who are constituents of Google* are not off the hook if they, for example, voted against the release but were overruled by the majority on the board. It is also worth noting that intention is not required.18 Consider that there are many cases where we hold agents responsible for unintended outcomes, such as cases of negligence, forgetting, and unintended side-effects. There is a way for a member who opposed the activity to escape collective responsibility: leave the collective. An individual who takes such a course may still be obligated to inform the authorities, and be individually accountable if they do not, but by leaving the group they are no longer open to collective responsibility. That said, there are some cases where leaving a group may be infeasible, be this due to social relations, important obligations, or other such commitments. Imagine I wish to exit Google* out of disagreement with their conduct, but I need to feed my children, and this is the only available employment, and so I remain despite my disagreement. In a case like this I contend that I would still be implicated in the blame of Google*. My regret might well alter assessments of my moral character, but it does not change my accountability qua membership. However, it is important to note that “leaving” here means exiting the decision-making procedure that unifies Google* – not necessarily quitting Google. All the employees of Google that do not contribute to the reasons-responsive mechanism (through either action or omission) – that are not members of Google* – are not considered for collective responsibility.
This is analogous to the movement toward smart sanctions: sanctions that target either individuals or carefully delineated collectives to ensure that the bearers of blame are the ones receiving the response.19 Once this principled approach to determining the justifiability of responsibility attribution is clarified, scapegoating will be minimised, and collective responsibility seems no more troublesome as regards fairness concerns than other forms of responsibility.
Consider a military committee tasked with determining whether a certain weapon system should be deployed. All the members of the committee can be said to have oversight control if each had an ability to contribute to the final decision, and all recognise its legitimacy. However, if a member of the committee were to oppose the majority and, upon realising that the decision would go against her, quit the committee in protest, and then took what steps were available to her to oppose it from outside the collective, then she would no longer be responsible. The same would hold for soldiers subordinate to the committee who are unaware of its decisions and lack any capacity to intervene – unless they took the further step of implementing the decision in question.
Given the prevalence of collective responsibility in practice, its time-worn recognition in the law, and the responses outlined above, it seems reasonable to adopt collective responsibility as a vital tool in helping us to justifiably attribute responsibility in cases of complex, integrated, and interrelated activity. In the next section I consider the implications for AMS.

Mapping responsibility for AMS

AMS will be embedded in a network of relationships to other actors: handlers, commanders, AI systems, intelligence officers, and so on.20 This is not a necessary feature of AMS usage, but rather a consequence that follows from the context of modern warfare. However, it is worth taking a moment to consider the different possible types of contexts in which an AMS could be found: (i) those where a human has direct control over the outcome in question, and so the autonomy of the AMS is side-lined; (ii) those where the AMS functions fully autonomously without any human control; and (iii) those cases where the AMS is functioning as part of a collective that includes human control, but the AMS is itself a contributor to the collective’s oversight control, and thus its collective agency.
My argument for closing the gap rests on two claims: first, it is only when the AMS is embedded in a collective action context that these gaps arise, and second, the conditions necessary to ground collective responsibility are present in these cases – thereby grounding responsibility. Thus, I argue that cases such as (i) raise nothing unique to consider, as in these cases the autonomous character of the AMS plays no morally relevant part in proceedings. Type (ii) cases are more interesting and raise the spectre of a responsibility gap. It must be stressed that in this case the AMS is opaque to external control, including oversight; if it were not, this would be a type (iii) case instead. An actual type (ii) case can be challenging to imagine, since, at least today, it is very unlikely that AMS will be deployed in this fashion. However, this may turn out to be the case, and so the possibility must be considered. That said, I argue that these cases do not, in fact, raise unique responsibility worries. To understand why, consider the following analogy:
War Dogs
A group of dogs are trained over several years to hunt and kill enemy combatants and are employed to clear tunnels and jungle redoubts as well as guard base perimeters. At one point, the commander simply releases them into an area without any means of oversight over their conduct. The dogs, who were trained in such a way as to try to ensure that they will not attack civilians, act unexpectedly and in fact do just this.
In this case it seems correct to say that the commander in question is responsible for this.21 An AMS released without oversight would be no different conceptually. Releasing an autonomous entity that may exhibit unforeseen conduct into a situation where it could do harm, without oversight, is already a pro tanto morally problematic thing to do. This would be a classic example of tracing, where any harm brought about by the decision can be traced back to this benighting and morally criticisable action. Though this analogy fits most readily with embodied systems, it is not limited to them: we can imagine a disembodied system – such as the Disposition Matrix – being deployed without any human interactants involved at any stage in its operation. In this very unrealistic but possible scenario, the system is left to its own devices, picking targets that are then engaged by drone attacks. In this case the responsibility falls on the individual or collective who decided to allow the system to operate in this entirely oversight-free manner. This is not to say that all cases where an AMS operates with a full lack of oversight need involve moral blame; rather, the outputs of the AMS in question must be foreseeably capable of causing harm or other morally problematic outcomes. This foreseeability is bounded by reasonable conscientiousness – the degree of foreseeability that we could reasonably demand of the agents involved.22 This is, thus, not a case unique to AMS, nor is it a case where responsibility cannot be tracked; ergo, there is no unique gap here.
This leaves (iii), which is how the majority of AMS are likely to be employed on the battlefield. What we find here is that a collective can, indeed, be said to have oversight over the outcomes to which the AMS’s functioning contributes. Thus, the collective has collective responsibility for said outcomes. This collective includes not only all those human participants that contributed to oversight control, but also the AMS involved. It is true, as we have already discussed, that the AMS will still be an unfitting bearer of individual responsibility, whereas the human members of the collective would be potentially fitting bearers of such. However, the AMS may be open to at least some of the burdens justified by collective responsibility, though how this would look in practice will require a thorough exploration of both the actual capacities of the systems in question and the nature of the harm caused in the particular case.

Concluding remarks

I have argued that moving beyond consideration of individual responsibility alone, and including collective responsibility in our proverbial toolkit, is an important step toward resolving the worrying techno-responsibility gaps potentially raised by the deployment of AMS. Such systems will almost universally be deployed embedded in deeply interrelated and hierarchical contexts, functioning as members of collectives. Provided such a collective exercises oversight control over its conduct, it will possess collective moral agency, and so be open to collective responsibility on the basis of said conduct. Additionally, most – if not all – cases where AMS are deployed outside of such collectives, and thus outside of oversight, do not raise unique responsibility gaps, as the deployment of an AMS – or indeed a human or non-human animal – in this manner is already criticisably irresponsible, with the responsibility resting at the feet of whoever made the deployment decision, be this an individual or a collective.

Declarations

Conflict of interest

The author declares no conflicts of interest.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://​creativecommons.​org/​licenses/​by/​4.​0/​.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Footnotes
1
See Gary Watson, Agency and Answerability (2004) for the seminal introduction of this idea. It has been expanded on by David Shoemaker (2015).
 
2
I have only indicated the reactive attitudes involved in blaming here, but equivalent attitudes are also present in cases of praise: e.g. gratitude by a direct beneficiary toward a praiseworthy person, admiration by a bystander toward the same.
 
3
See John M. Fischer and Mark Ravizza (1998), Neil Levy (2011), and Merel Noorman and Deborah G. Johnson (2014) for examples.
 
4
For support of this position see Fischer and Ravizza (1998), Levy (2011), McKenna (2012), and Sartorio (2016). However, this condition is not unanimously accepted: see Harry Frankfurt (1988), Watson (2004), Nomy Arpaly and Timothy Schroeder (2013), and Shoemaker (2015) for examples of those who argue that a control condition is not necessary for responsibility.
 
5
See Fischer and Ravizza (1998) for their initial collaboration, and Fischer (2007) for further development.
 
6
Guidance control also has an "ownership condition". The idea is that for an agent to have appropriate ownership of the actual-sequence mechanism, she must take responsibility for the mechanisms giving rise to her actions. This condition is the result of a need to resolve so-called manipulation cases – of which Pereboom's Four-Case Argument is a good example (2009, 2013). However, this is a flawed condition, as it is not the case that only those who see themselves as legitimate targets for reactive attitudes are open to responsibility (see Mele, 2006). Moreover, it is not clear that the ownership condition rules out all manipulation cases (see Pereboom, 2007). I take a better solution for counteracting manipulation concerns to be to endorse McKenna's hard-line response to such cases (McKenna, 2008).
 
7
The editor may not be exclusively responsible for the contents, as the actual writers also bear responsibility.
 
8
For influential examples see French (1984) and Hess (2014).
 
9
See Rönnegard (2013, 2015), Ludwig (2014), and Rupert (2014) for examples of those pushing this objection, and Himmelreich (2019b) for a defense against such objections.
 
10
See List and Pettit (2011), Huebner (2014), Tollefsen (2015), Epstein (2017), and Strohmaier (2020) for examples of those following this strategy.
 
11
Schulzke (2012) has also argued for a notion of distributed responsibility that he contends can close the responsibility gap supposedly opened by AMS. My notion differs from his in that responsibility is not necessarily borne by the members of the collective; rather, the collective can itself be a bearer of responsibility. This aspect of my account will be discussed below.
 
12
What Scanlon (2015) calls substantive responsibility. Note that such adjustments are also part of individual responsibility.
 
13
The duty of improvement falls on the collective, and every member of the collective has a duty to improve the whole. This can at times mean improving oneself or trying to improve others or improving aspects of the group’s context. An AMS might possess the ability to seek improvement or not, depending on the features of the particular system. In either case the duty to improve the AMS may be there; the salient difference is whether or not the AMS can contribute to the improvement.
 
14
As per Shoemaker (2015) I take individuals like this to be potentially open to attributability and answerability, but not accountability.
 
15
Levy (2018: 200) is himself quick to point out that the options here are not exhaustively between individual agents and collective agents. It may be the case that the locus of agency rests on an extended agent – where “the agent may owe her powers and capacities to other individuals and institutions without coming to constitute a higher-level entity”.
 
16
This is similar to Shockley’s notion of “coordinating control” (2007).
 
17
This is not to rule out that individuals could still be praiseworthy where and when they appropriately respond to individual reasons in the course of the development. In this case two different activities – tracked to two different agents – are being praised; it is not that the individual and the collective share praise for the same thing.
 
18
Contra Sverdlik (1987), a lack of intention to do ill is not exculpatory.
 
19
See Gordon (2011). See also David Cortright and George A. Lopez (2002).
 
20
Schulzke (2012: 204), but this insight permeates his entire argument.
 
21
Given that many military actions involve (or are) collective actions, the commander's decision is likely embedded within such a collective context, unless the order was unilateral and without oversight by others in the chain of command. If the order was not unilateral, then there will be collective responsibility in this case. The same example can also be run with humans: a commander who allows a group of soldiers to conduct themselves without oversight is responsible for their (mis)conduct, either individually or collectively depending on the facts of the case.
 
22
Sincere thanks to an anonymous reviewer for demanding greater clarity on this point.
 
Literature
Anscombe, G. E. M. (1963). Intention (2nd ed.). Oxford: Basil Blackwell.
Arpaly, N., & Schroeder, T. (2013). In Praise of Desire. Oxford Scholarship Online.
Björnsson, G. (2020). Collective responsibility and collective obligations without collective moral agents. In S. Bazargan-Forward & D. Tollefsen (Eds.), The Routledge Handbook of Collective Responsibility.
Björnsson, G. (forthcoming). Group duties without decision-making procedures. Journal of Social Ontology.
Bratman, M. (1987). Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Bratman, M. (2009). Intention, practical rationality, and self-governance. Ethics, 119, 411–443.
Collins, S. (2019). Group Duties: Their Existence and Their Implications for Individuals. Oxford Scholarship Online.
Corlett, J. A. (2001). Collective moral responsibility. Journal of Social Philosophy, 32, 573–584.
Cortright, D., & Lopez, G. A. (Eds.). (2002). Smart Sanctions: Targeting Economic Statecraft. Rowman & Littlefield.
Crawford, N. C. (2007). Individual and collective moral responsibility for systemic military atrocity. The Journal of Political Philosophy, 15(2), 187–212.
Davidson, D. (2001). Essays on Actions and Events (2nd ed.). Oxford: Clarendon Press.
Dworkin, G. (1988). The Theory and Practice of Autonomy. Cambridge University Press.
Epstein, B. (2017). What are social groups? Their metaphysics and how to classify them. Synthese, 196, 4899–4932.
Feinberg, J. (1989). Autonomy. In J. P. Christman (Ed.), The Inner Citadel: Essays on Individual Autonomy.
Fischer, J. M. (2007). Compatibilism. In J. M. Fischer, R. Kane, D. Pereboom, & M. Vargas, Four Views on Free Will. Singapore: Blackwell Publishing.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and Control: A Theory of Moral Responsibility. New York: Cambridge University Press.
Frankfurt, H. (1988). The Importance of What We Care About. Cambridge: Cambridge University Press.
French, P. (Ed.). (1998). Individual and Collective Responsibility. Rochester, VT: Schenkman.
Gordon, J. (2011). Smart sanctions revisited. Ethics & International Affairs, 25(3), 315–335.
Heersmink, R. (2017). Distributed cognition and distributed morality: Agency, artifacts and systems. Science and Engineering Ethics, 23(2), 431–448.
Hess, K. M. (2014). The free will of corporations. Philosophical Studies, 168(1), 241–260.
Huebner, B. (2014). Macrocognition: A Theory of Distributed Minds and Collective Intentionality. New York: Oxford University Press.
Levy, N. (2011). Hard Luck: How Luck Undermines Free Will and Moral Responsibility. Oxford Scholarship Online.
Levy, N. (2018). Socializing responsibility. In K. Hutchison, C. MacKenzie, & M. Oshana (Eds.), Social Dimensions of Moral Responsibility. Oxford University Press.
Lewis, H. D. (1948). Collective responsibility. Philosophy, 24, 3–18.
List, C., & Pettit, P. (2011). Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford Scholarship Online.
Liu, H. Y. (2016). Refining responsibility: Differentiating two types of responsibility issues raised by autonomous weapon systems. In N. Bhuta, S. Beck, & R. Geiß (Eds.), Autonomous Weapon Systems: Law, Ethics, Policy.
Ludwig, K. (2014). The ontology of collective action. In S. Chant, F. Hindriks, & G. Preyer (Eds.), From Individual to Collective Intentionality. Oxford: Oxford University Press.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
McKenna, M. (2008). Compatibilism & desert: Critical comments on Four Views on Free Will. Philosophical Studies, 144(1), 3–13.
McKenna, M. (2012). Directed blame and conversation. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its Nature and Norms. Oxford Scholarship Online.
Mele, A. (2001). Autonomous Agents: From Self-Control to Autonomy. New York: Oxford University Press.
Mele, A. R. (2006). Fischer and Ravizza on moral responsibility. The Journal of Ethics, 10(3), 283–294.
Nagel, S. K. (2015). When aid is a good thing: Trusting relationships as autonomy support in health care settings. The American Journal of Bioethics, 15(10), 49–51.
Narveson, J. (2002). Collective responsibility. Journal of Ethics, 6, 179–198.
Niker, F., Felsen, G., Nagel, S. K., & Reiner, P. B. (2021). Autonomy, evidence-responsiveness, and the ethics of influence. In M. Blitz & J. C. Bublitz (Eds.), Neuroscience and the Future of Freedom of Thought. Hampshire: Palgrave-Macmillan.
Noorman, M., & Johnson, D. G. (2014). Negotiating autonomy and responsibility in military robots. Ethics and Information Technology, 16, 51–62.
Osiel, M. J. (1998). Obeying orders: Atrocity, military doctrine, and the law of war. California Law Review, 86(5), 939–1129.
Osiel, M. J. (2011). Making Sense of Mass Atrocity. Cambridge University Press.
Pereboom, D. (2007). Hard incompatibilism. In J. M. Fischer, R. Kane, D. Pereboom, & M. Vargas, Four Views on Free Will. Singapore: Blackwell Publishing.
Pereboom, D. (2009). Hard incompatibilism and its rivals. Philosophical Studies, 144(1), 21–33.
Pereboom, D. (2013). Free will skepticism, blame, and obligation. In D. J. Coates & N. Tognazzini (Eds.), Blame: Its Nature and Norms. New York: Oxford University Press.
Pettit, P. (2001). A Theory of Freedom: From the Psychology to the Politics of Agency. Cambridge: Polity.
Reiff, M. (2008). Terrorism, retribution, and collective responsibility. Social Theory and Practice, 28(3), 442–455.
Rönnegard, D. (2013). How autonomy alone debunks corporate moral agency. Business & Professional Ethics Journal, 32, 77–106.
Rönnegard, D. (2015). The Fallacy of Corporate Moral Agency. Issues in Business Ethics. Dordrecht: Springer.
Rupert, R. (2014). Against group cognitive states. In S. Chant, F. Hindriks, & G. Preyer (Eds.), From Individual to Collective Intentionality. Oxford: Oxford University Press.
Sartorio, C. (2016). A partial defense of the actual-sequence model of freedom. The Journal of Ethics, 20, 107–120.
Scanlon, T. M. (2015). Forms and conditions of responsibility. In R. Clarke, M. McKenna, & A. M. Smith (Eds.), The Nature of Moral Responsibility: New Essays. Oxford Scholarship Online.
Schulzke, M. (2012). Autonomous weapons and distributed responsibility. Philosophy & Technology, 23, 203–219.
Shockley, K. (2007). Programming collective control. Journal of Social Philosophy, 38(3), 442–455.
Shoemaker, D. (2011). Psychopathy, responsibility, and the moral/conventional distinction. The Southern Journal of Philosophy, 49, 99–124.
Shoemaker, D. (2015). Responsibility from the Margins. Oxford Scholarship Online.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Strawson, P. F. (1962). Freedom and resentment. Proceedings of the British Academy, 48, 1–21.
Strohmaier, D. (2020). Two theories of group agency. Philosophical Studies, 177, 1901–1918.
Sverdlik, S. (1987). Collective responsibility. Philosophical Studies, 51, 61–76.
Tollefsen, D. (2003). Participant reactive attitudes and collective responsibility. Philosophical Explorations, 6, 218–234.
Tollefsen, D. (2015). Groups as Agents. Cambridge: Polity.
Van Rompaey, L. (2019). Shifting from autonomous weapons to military networks. Journal of International Humanitarian Legal Studies, 10, 111–128.
Walter, J. K., & Ross, L. F. (2014). Relational autonomy: Moving beyond the limits of isolated individualism. Pediatrics, 133, S16–S23.
Watson, G. (2004). Agency and Answerability. Oxford: Oxford University Press.
Metadata
Title: Autonomous Military Systems: collective responsibility and distributed burdens
Author: Niël Henk Conradie
Publication date: 01-03-2023
Publisher: Springer Netherlands
Published in: Ethics and Information Technology, Issue 1/2023
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI: https://doi.org/10.1007/s10676-023-09696-9
