Published in: Ethics and Information Technology 3/2010

Open Access 01-09-2010

The cubicle warrior: the marionette of digitalized warfare

Authors: Lambèr Royakkers, Rinie van Est


Abstract

In the last decade we have entered the era of remote controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator, or so-called ‘cubicle warrior’, who remotely controls military robots from behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy. As a result, the cubicle warrior is morally disengaged from his destructive and lethal actions, which undermines the knowledge condition for responsibility: what he should know in order to make responsible decisions. Now and in the near future, three factors may increase this moral disengagement even further by weakening the cubicle warrior's locus of control orientation: (1) ‘photo shopping’ the war; (2) the moralization of technology; and (3) the speed of decision-making. As a result, cubicle warriors can no longer reasonably be held responsible for the decisions they make.

Introduction

In the last decade we have entered the era of remote controlled military technology: robot drones, mine detectors and sensing devices are employed on the battlefield but are controlled at a safe distance by humans. The aim of deploying these robots is to decrease the number of soldiers killed on the battlefield, to gain tactical and operational superiority, and to reduce emotional and traumatic stress among soldiers (Veruggio and Operto 2008). To illustrate, almost twenty percent of the soldiers returning from Iraq or Afghanistan have post-traumatic stress disorder or suffer from depression (cf. Tanielian and Jaycox 2008), causing a wave of suicides, particularly among American veterans who have fought in Afghanistan or Iraq. Since they can reduce stress, remote controlled devices could also be a way to foster more humane decision making by soldiers. It is well known that in the heat of battle, the minds of soldiers can become clouded with fear, anger or vengefulness, resulting in unethical behaviour or even war crimes.
A survey done by the US Army Surgeon General’s Office (2006) confirmed this picture. For example, less than half of the soldiers and marines serving in Iraq said that non-combatants should be treated with dignity and respect, and seventeen percent even held that all civilians should be treated as insurgents. Moreover, fewer than half of the soldiers would report a colleague for unethical battlefield behaviour. Finally, troops who were stressed, angry, anxious or mourning lost colleagues, or who had handled the dead, were more likely to say they had mistreated civilian non-combatants or violated ethical norms. Remote controlled robotic warfare thus might have many advantages, as it distances soldiers from direct physical contact with some of the sources of this emotional stress.
The fact is that the deployment of military robots or unmanned semi-autonomous vehicles is growing rapidly. Presently, more than 17,000 military robots are active in the US military (cf. Singer 2009 and Krishnan 2009). Most of these robots are unarmed, and are mainly used for clearing improvised explosive devices and for reconnaissance. However, over the last years the deployment of armed military robots has also been on the increase. One of the most widely used unmanned combat aerial vehicles (UCAVs) is the Predator. This unmanned airplane, which can remain airborne for 24 hours, is currently employed extensively in Afghanistan. The Predator drones can fire Hellfire missiles and are flown by pilots located at a military base in the Nevada desert, thousands of miles away from the battlefield. On top of this, its successor, the Reaper, which may phase out the F-16, has already been spotted in Afghanistan. This machine can carry 5,000 lb of explosive devices, Hellfire missiles, or laser-guided bombs, and uses day-and-night cameras to navigate through cloud cover. This unmanned combat aerial vehicle is operated by two pilots located at a ground control station, behind a computer, at a safe distance from the war zone.
In 2007 the first armed unmanned ground vehicle, SWORDS (Special Weapons Observation Reconnaissance Detection System), was introduced on the battlefield in Iraq to patrol the streets of Baghdad. The SWORDS can be equipped with machine guns, grenade launchers, or anti-tank rocket launchers, and can hit a bull’s eye at 2,000 m. Its successor, the bigger and more heavily armed MAARS (Modular Advanced Armed Robotic System), is already on the market. The SWORDS and the MAARS are both able to navigate autonomously towards specific targets using the global positioning system, but the firing of weapons must be done by a human operator at a safe distance. Autonomous robots are high on the American military agenda, because they are more cost-effective and promise a risk-free war. The United States plans to spend 4 billion dollars by 2010 on military robots (US Department of Defense 2009).
Not unexpectedly, the use of these armed military robots raises issues with respect to responsibility: Who can reasonably be held responsible for an atrocity that would normally be described as a war crime when it is caused by a military robot? This, in turn, goes back to the question of under what circumstances there is reasonable ground to hold an agent responsible for a certain outcome. Following the literature on responsibility, we will assume that an agent can only reasonably be held responsible if he or she has control over his or her behaviour and the resulting consequences (Fischer and Ravizza 1998, 13). This means that an agent can be considered responsible for a certain decision only if he or she made the decision voluntarily and knowingly. The condition of voluntariness means that an agent who does not act freely cannot be held responsible: if an agent was coerced into doing an action, it is not reasonable to hold him or her responsible for this action or its consequences. The knowledge condition has a normative aspect, i.e., it relates to what people should know or can reasonably be expected to know with respect to the particular facts surrounding their decision or action.
In this article we will critically assess to what extent human operators or so-called ‘cubicle warriors’ (computer operators who remotely control armed military robots) may or may not reasonably be held responsible for war crimes. We will explain the problems, raised by several authors, of attributing responsibility with respect to the deployment of armed military robots. In the relevant literature, though, the role of the human operator is often underplayed. In our opinion, the main problem with regard to the attribution of responsibility lies first and foremost with the cubicle warrior himself. In the last two sections, we will elaborate on the position of the cubicle warrior as part of a complex socio-technical system that enables soldiers to fight behind computer screens, far away from the actual battlefield, and that leads to moral disengagement through a loss of the cubicle warrior's locus of control orientation. Furthermore, we discuss how the digitalization of warfare risks emotionally detaching moral action from moral awareness and reasoning, and to what extent the cubicle warrior can reasonably be held responsible for the decisions he makes now and will make in the future.

Ascribing responsibility for the actions of military robots

The law of armed conflict, or jus in bello, which deals with the issue of allowed and prohibited practices in war, is mainly based on the principles of discrimination and proportionality (Walzer 1977). The principle of discrimination is concerned with avoiding deliberate attacks on ‘the innocent’, the idea being that civilians should not be made to suffer in war. The principle of proportionality states that it is unjust to inflict greater harm than that which is unavoidable in order to achieve legitimate military objectives. These jus in bello principles are manifest in international humanitarian law and treaties banning, regulating or limiting the possession and use of particular forms of weaponry. According to Sparrow (2007), a fundamental condition of fighting a just war is that an individual person may be held responsible for civilian deaths in the course of it, and this condition is one of the requirements of jus in bello:1
“The assumption and/or allocation of responsibility is also vital in order for the principles of jus in bello to take hold at all. The principle of discrimination, for instance, which requires that combatants distinguish between legitimate and illegitimate targets, assumes that we can specify who is responsible for attacks that may violate it. More generally, application of the principles of jus in bello requires that we can identify the persons responsible for the actions that these principles are intended to govern.”
Assuming that Sparrow’s claim is correct, the lethal application of military robots on the battlefield makes this attribution of responsibility problematic (see, e.g., Dennet 1997; Asaro 2007; Sparrow 2007; Sharkey 2008): Who should be held responsible for civilian casualties caused by military robots? To whom should we assign responsibility for improper conduct and unauthorized harm through the use of military robots (whether by error or by intent): the designer/programmer, the field commander, the robot manufacturer, the robot controller/supervisor, or the nation that commissioned the robot?
This problem of the attribution of responsibility is morally problematic for at least two reasons. The first reason is that many people, especially victims and members of the public but often also members of the military community, find it morally unsatisfactory that nobody can be held responsible when a civilian casualty occurs. Of course, this search for somebody to blame may be misguided, but at least in situations with civilian casualties it seems reasonable to say that somebody should bear responsibility. The second reason is the desire to learn from mistakes, to do better in the future and to achieve a certain result (Van de Poel and Royakkers forthcoming). If nobody is held responsible, this may not happen.
In spite of these two reasons, Sparrow’s claim—that it is a fundamental condition of fighting a just war that an individual person may be held responsible for civilian deaths in the course of it—is disputable. We do not normally say that the possibility of holding someone responsible after the fact for a civilian casualty is a condition of permissibility (cf. Scanlon 2008), at least in the absence of a clear rule to the contrary (e.g. “you may use the playground as long as you designate someone who is in charge of putting everything in place at the end”).2 Especially in the military operational context, which is highly complex and dynamic, we often have to deal with the absence of clear rules.
According to some other authors (e.g., Krishnan 2009; Asaro 2007), there is no real vacuum of policies surrounding military robots with respect to the attribution of responsibility, since there are already relevant legal and ethical concepts that can properly deal with this. For example, Krishnan (2009, 105) ends his discussion on the attribution of responsibility as follows:
“It appears that the legal problems with regard to accountability might be smaller than some critics of military robots believe. The chain of command is not interrupted by deploying autonomous systems on the battlefield. (…) If the robot does not operate within the boundaries of its specified parameters, it is the manufacturer’s fault. If the robot is used in circumstances that make its use illegal, then it is the commander’s fault.”
In this paper we do not aim to elaborate on the above discussion of the attribution of responsibility. Instead, we will focus on an important issue that is often missing from that discussion, namely the responsibility of the human operator, or cubicle warrior. At the moment, most military robots are tele-operated and are incapable of firing their weapons without being controlled by a human operator. So, the decision to open fire, or to take action that could threaten human life, is considered and approved by a human operator. According to Sparrow (2007), the requirement that human operators approve any decision to use lethal force will avoid the problems with respect to the attribution of responsibility. It is true that human operators can in general be held responsible for the decisions they make; however, this becomes problematic in circumstances in which they do not have control over their behaviour and the resulting consequences. This will be the topic of the next section.

Moral disengagement of the cubicle warrior

Many technological developments in the past, from the slingshot and the cannon to the bomber, have increased the physical and emotional distance between soldiers and their enemies. Unmanned robotic systems represent yet another step in the process of physically and psychologically detaching soldiers from the actual war scene. For cubicle warriors the decision-making context differs strongly from that of soldiers in combat. Cubicle warriors operate from behind computer screens, physically far away from the battlefield. This means that they are safe in a physical sense; they cannot be wounded. As a consequence, cubicle warriors do not feel any fear. Nowadays a cubicle warrior finds himself in a unique situation: on the one hand, the socio-technical system enables him to fight the war from a remote place; on the other hand, the same system connects the soldier to the war zone, in a virtual manner that is, thus enabling some form of tele-presence.
Remote controlled warfare therefore represents a further step in a process of what Borgmann (1984) calls ‘moral commodification’. For a soldier in combat, fighting an enemy is something that costs a lot of effort, literally ‘blood, sweat and tears’. Remote controlled warfare has got rid of the ‘blood and sweat’: the cubicle warrior can kill an enemy by pushing a button. It has also removed some of the ‘tears’ normally involved in killing people. Fighting from behind a computer is not as emotionally potent as fighting on the battlefield.
The convergence of interfaces used in computer games and military robotics also seems to increase the emotional distance from the enemy (cf. Bauman 1997). To illustrate, the military is currently using computer games to recruit and train soldiers, quite likely including future cubicle warriors. For newly recruited soldiers, who have been playing videogames throughout their teenage years, there might not be a huge contrast between the experience of playing a video game and that of actually being a cubicle warrior. In Wired for War, Singer quotes a young pilot who operates drones over Iraq and Afghanistan and who describes how he experiences fighting from a cubicle: “It’s like a video game. It can get a little bloodthirsty. But it’s fucking cool” (Singer 2009, 308–309). Cubicle warriors would then be conditioned to dehumanize the enemy, to view them as sub-humans or non-humans, so that it becomes easier to kill. It splits means and ends: cubicle warriors lose sight of the means and their ethical implications and start concentrating only on the ends or outcomes. According to Bandura (1986), dehumanization, by reducing identification with the targets, is a mechanism of moral disengagement. Research has consistently shown that dehumanization disengages moral decisions (see, e.g., Detert et al. 2008). Moral disengagement disconnects a contemplated act from the guilt or self-censure that would otherwise prevent it, and explains why otherwise normal people are able to engage in unethical behaviour without apparent guilt or self-censure (Bandura 1986). Empirical evidence supports the claim that moral disengagement leads to unethical behaviour. For example, McAlister (2001) found moral disengagement to be positively related to support for military attacks against Iraq and Yugoslavia, and Aquino et al. (2007) found moral disengagement to be positively related to preferring death over non-lethal options for the perpetrators of the 9/11 attacks.
According to Grossman (1996), this moral disengagement from destructive and lethal actions reduces, or neutralizes, the soldier’s inhibition to kill. A cubicle warrior illustrates this: “The truth is, it wasn’t all I thought it was cracked up to be. I mean, I thought killing somebody would be this life-changing experience. And then I did it, and I was like ‘All right, whatever.’ (…) Killing people is like squashing an ant. I mean, you kill somebody and it’s like ‘All right, let’s go get some pizza’” (Singer 2009, 391–392). So, the danger is that this makes some cubicle warriors too relaxed, too unaffected by killing, and makes them do things that they would never do if they were there in person on the battlefield.
Creating moral disengagement reduces or even eliminates the stress of cubicle warriors. Moral disengagement, however, also limits the cubicle warrior's ability to reflect on his decisions and thus to become fully aware of their consequences. Instead, cubicle warriors are focussed on the outcome, for example the targeting of blips on a screen, without being fully consciously aware that these blips are human beings. The depersonalization of war caused by the dehumanizing of the enemy means that cubicle warriors cannot reasonably be held responsible for the decisions they make, since the knowledge condition is not fulfilled. This condition requires that the cubicle warrior be fully aware of the consequences of his decisions, and it is not fulfilled if the depersonalization of war by dehumanizing the enemy incites cubicle warriors to subconsciously believe that they are playing a video game. Consequently, cubicle warriors are neither able to reliably identify targets nor able to comprehend what happens to the targets when lethal force is deployed (Sparrow 2009), which is necessary to hold someone reasonably responsible.

Locus of control

The emotional and moral disengagement of the cubicle warrior may increase in the future, due to a noticeable shift from controlling to monitoring. Currently, the cubicle warrior controls the situation, i.e., he assigns tasks or makes changes and verifies that the robot's execution meets the requirements. In this section, we will discuss three factors that will drive this shift in the near future: (1) photo shopping the war; (2) the moralization of technology; and (3) the speed of decision-making. Through these factors, his future role may be restricted to monitoring, meaning that the cubicle warrior keeps an eye on the process and only interferes if something goes wrong. This relates to locus of control, a term from psychology that refers to the extent to which individuals believe that they can control outcomes. Treviño and Youngblood (1990) have shown that there is a link between locus of control and moral decision-making. They argue that those who see a clear connection between their own behaviour and its outcomes are more likely to take responsibility for that behaviour (see also Levenson 1981; Rotter 1966). In turn, people who believe that they have little personal control in certain situations—such as monitoring—are particularly likely to go along with rules, decisions and situations even if they are unethical or have harmful effects (cf. Detert et al. 2008). This research shows that the shift from controlling to monitoring can lead to moral disengagement and may, therefore, result in unethical behaviour. For our case, this would imply that we can no longer hold a cubicle warrior reasonably responsible for his decisions if he has no real control over the outcomes: it is not the cubicle warrior who takes the decisions, but the military robot.

Photo shopping the war

Although fighting from behind a computer is not as emotionally potent as being on the battlefield, pushing a button to kill someone can still be a stressful job. Various studies have reported physical and emotional fatigue and increased tensions in the private lives of American virtual soldiers who operate the Predator drones in Iraq and Afghanistan (Donnelly 2005; Kaplan 2006). First of all, cubicle warriors can be touched emotionally and psychologically by the things they see on screen. For example, a cubicle warrior may witness a massacre by terrorists, yet find himself in a situation in which he is helpless to prevent it, or he may see civilians being killed in a cruel way as a result of his own actions. This is certainly not a hypothetical situation. Local authorities in Pakistan claim that near the Afghan border drone strikes on Al Qaeda and affiliate targets have killed at least 687 civilians (Mir 2009). A second factor that increases stress is the fact that the use of remote controlled military robotics causes operators to live in two worlds at the same time: a ‘normal’ life in the civil world and a virtual life of combat. As a result, these virtual warriors constantly experience radical shifts in context, from battlefield to private family life. As a cubicle warrior describes vividly: “You are going to war for 12 h, shooting weapons at targets, directing kills on enemy combatants and then you get in the car, drive home and within 20 min you are sitting at the dinner table talking to your kids about their homework” (Horton 2009).
This problem of ‘residual’ stress among cubicle warriors has led to proposals to reduce such stress. In particular, the visual interface can play an important role in reducing stress. Interfaces that only show abstract and indirect images of the battlefield will probably cause less stress than more advanced realistic images (Singer 2009). Let us reflect on this proposal. A first remark is that it seems technically feasible: the war scene has already been digitized and encoded, and from a technical perspective it is not hard to digitally recode the war scene so that it induces less moral discomfort in the operator. Such ‘photo shopping’ of the war, however, raises some serious ethical issues.
Showing abstract images would dehumanize the enemy, and as a result would desensitize, and thus dehumanize, the cubicle warrior even further. In this case, it is no longer the real war that is numbing the soldier, but the digital recoding of that war. The depersonalization of war can even go so far that the cubicle warrior is no longer aware of the fact that he is actually involved in a real war. In the current situation it is already hard for a cubicle warrior to distinguish between a video war game and operating a drone. A next step would be to let a cubicle warrior think he is playing a computer game and destroying enemy ‘avatars’, while he is actually killing real people on the other side of the globe. From a technological perspective this seems only a minor step. From an ethical point of view, however, it would mean a radical change. In this situation the human warrior would be both physically and emotionally totally detached from his actions, which leads to a decrease in locus of control orientation (cf. Detert et al. 2008). In such a horror scenario military robotics has changed the human soldier into a military robot, not bothered by any moral sense of guilt or responsibility at all.

The moralization of technology

The unmanned combat aerial vehicles (UCAVs) connect the cubicle warriors with the war zone; they are the eyes of the tele-soldier. Semi-autonomous military robots, like the Predator, can precisely determine a certain target and send the GPS coordinates and camera images back to the operator. (The American global navigation satellite system, GPS, consisting of 24 to 32 satellites, is therefore also part of the socio-technical system we are describing.) Based on the information projected on his computer screen, the cubicle warrior has to decide, for example, whether or not to launch a missile. His decision is mediated by a computer-aided diagnosis of the war situation. Future military robots will have ethical constraints built into their design, a so-called ‘ethical governor’, which will suppress unethical lethal behaviour. Although these ethical governors are not very sophisticated yet, current research shows major progress in this direction. For example, Arkin (2007) has done research—sponsored by the US Army—to create a mathematical decision mechanism consisting of constraints, represented as prohibitions and obligations, derived directly from the laws of war. Moreover, a future goal is that military robots can refuse orders of a cubicle warrior which, according to the ethical governor, are illegal or unethical. For example, a military robot might advise a cubicle warrior not to push the button and shoot because the diagnosis of the camera images tells the operator he is about to attack non-combatants; that is, the software of the military robot that diagnoses the war situation provides the cubicle warrior with ethical advice. An ethical governor thus helps to shape moral decision-making. This is a development which Achterhuis (1998) has called the ‘moralization of technology’. In our case this would mean that besides moralizing the cubicle warrior (‘do not shoot non-combatants’), we should also moralize his material environment. In other words, the task of seeing to it that no Rules of Engagement3 are violated could be delegated to a military robot (cf. Verbeek 2005). Achterhuis’ plea for a moralization of technology has received severe criticism. The main criticism is that human freedom is affected when human actions are explicitly and consciously steered with the help of technology. A consequence is that humans then simply show a type of behaviour that was desired by the designers of the technology instead of explicitly choosing to act this way. According to Cumming (2006), this will also be the case with ethical governors, since an ethical governor may form a ‘moral buffer’ between cubicle warriors and their actions, allowing them to tell themselves that the military robot has taken the decision.
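To make this description more concrete, the toy sketch below (in Python) illustrates what a constraint-based ethical governor of the kind discussed above might look like in its simplest form. It is purely illustrative and not Arkin's actual architecture: the class TargetDiagnosis, the rule list PROHIBITIONS, the confidence threshold and the function governor_advice are hypothetical simplifications introduced here only to show how encoded prohibitions could produce the kind of ‘ethical advice’ the text describes.

```python
# Illustrative sketch only (not Arkin's actual architecture): a toy
# constraint checker in the spirit of the 'ethical governor' described above.
# All names, thresholds and rules are hypothetical simplifications.

from dataclasses import dataclass

@dataclass
class TargetDiagnosis:
    """What the robot's software reports to the operator about a proposed target."""
    classification: str              # e.g. 'combatant', 'non-combatant', 'unknown'
    confidence: float                # classifier confidence between 0.0 and 1.0
    civilians_in_blast_radius: int   # estimated bystanders near the target

# Prohibitions: conditions under which the governor advises withholding fire.
PROHIBITIONS = [
    lambda d: d.classification != "combatant",   # discrimination: only combatants
    lambda d: d.confidence < 0.9,                # insufficient certainty about the target
    lambda d: d.civilians_in_blast_radius > 0,   # toy proportionality rule
]

def governor_advice(diagnosis: TargetDiagnosis) -> str:
    """Return advice to the cubicle warrior; the human still pushes the button."""
    if any(rule(diagnosis) for rule in PROHIBITIONS):
        return "WITHHOLD FIRE: engagement would violate an encoded constraint"
    return "NO VIOLATION FOUND: engagement permitted under the encoded rules"

if __name__ == "__main__":
    # Example: a confident 'combatant' classification, but civilians are nearby,
    # so the governor advises against firing.
    print(governor_advice(TargetDiagnosis("combatant", 0.95, civilians_in_blast_radius=2)))
```

Even in this minimal form, the sketch makes the worry about a ‘moral buffer’ tangible: the advice shown to the operator is produced entirely by rules encoded by designers, so the cubicle warrior can tell himself that the system, rather than he himself, has made the call.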
The consequence of the moralization of military robots is that the decision of a cubicle warrior is not the result of moral reflection, but is mainly determined or even enforced by a military robot. In other words, the decisions of cubicle warriors are not made in complete freedom. A further consequence is that cubicle warriors may come to over-rely on military robots (Cumming 2006), merely monitoring the situation instead of controlling it. This raises the question of whether we can hold cubicle warriors reasonably responsible for a robot's lethal mishap.

The speed of decision-making

Soldiers in combat often have to decide in a fraction of a second what kind of action to undertake. For example, they have to decide who is a combatant and who is a civilian, whether or not engaging a combatant may endanger too many civilians, or whether or not a combatant wants to surrender. In these types of war the chances of getting injured or killed are high. Moreover, as indicated in the introduction, soldiers experience a lot of stress, which may impede their moral decision-making capability.
The US Army emphasizes the need for greater speed in military operations (US Department of Defense 2009). It is, however, recognized that the desire for speed in decision-making has to be limited by the time needed for human reflection; as the USS Vincennes attack on a civilian Iranian airliner in 1988 suggested, automated systems may encourage or even force a decision before commanders are ready (Gruner 1990). There is nothing new, of course, about military personnel having to take important decisions quickly. For example, to reduce uncertainty and simplify the decision-making process, individual soldiers or fighter pilots normally operate under Rules of Engagement. However, military robots will place an even greater emphasis on operating faster by decreasing decision-cycle time, since military robots can integrate more information from more sources, far faster, before responding with lethal force than a human possibly could in real time (Arkin 2007). As a result of this integration of information, the cubicle warrior will be faced with an overly ‘clean’ picture of the situation on his screen, which he has to translate immediately into actionable knowledge. Research has shown that people make less accurate decisions in an immediate-judgement situation when presented with indirectly obtained information (Ham and van den Bos 2010). The abstract and indirect images of the battlefield on the screens of cubicle warriors can be considered indirectly obtained information. Directly obtained information, through one's own observations such as real-life images on the screen, may lead to more accurate decisions in situations where one must decide immediately.
Furthermore, the cubicle warrior will have little idea what information has gone into the overly ‘clean’ picture, how reliable it is, what items of information may have been combined with others, what information may have been discarded, and so on. The information he receives might therefore be unreliable and impossible for the cubicle warrior to double-check. A consequence of this is the possibility of over-reliance on an erroneous abstract picture that is neither truly shared nor sufficiently representative of reality. The result may be that we “shoot first and ask questions later” (Barnett 1999, 38). So, the cubicle warrior will have no personal control over the outcome, since he cannot assign tasks, make changes, or verify that the military robot's execution meets the requirements. This implies moral disengagement, and so increases unethical behaviour (Detert et al. 2008). In brief, a cubicle warrior will have no control over his decision, since he cannot know what its consequences are: his decision rests on the overly ‘clean’ picture provided by a military robot rather than on adequate deliberation. Therefore, we cannot then hold a cubicle warrior reasonably responsible for a robot’s lethal mishap.

Conclusions

Most military robots currently find their applications in surveillance, reconnaissance, and the location and destruction of mines and improvised explosive devices. These robots are unarmed, harm no one, and save lives. But not all military robots are unarmed; different types of armed military robots are currently used on the battlefield. An important ethical problem involves the assignment of responsibility in the long causal chain associated with the design and deployment of armed military robots, which stretches from the manufacturer, programmer, designer and departments of defence, via the commanding officer, to the cubicle warrior. This responsibility gap is the main reason why lawyers and ethicists argue that a human should be kept in the loop, so that traditional accountability can be ensured and the problem of the allocation of responsibility thereby avoided.
In this paper, we have discussed the problem, neglected in the literature, of attributing responsibility to cubicle warriors. Since the cubicle warrior is the one who decides to use lethal force, the attribution of responsibility seems to be guaranteed. We have shown, however, that even though we can in general hold the cubicle warrior responsible for his decisions, we cannot hold him reasonably responsible. The depersonalization of war by dehumanizing the enemy leads to a loss of the cubicle warrior's locus of control orientation, and therefore to moral disengagement. We have discussed three factors that will emotionally and morally disengage the cubicle warrior even further in the near future: (1) photo shopping the war; (2) the moralization of technology; and (3) the speed of decision-making. A positive side of this disengagement is that it reduces the psychological stress among cubicle warriors, who are simultaneously ‘present’ in and absent from the battlefield. Unfortunately, this disengagement also limits, or even eliminates, proper reflection among cubicle warriors on the life-and-death decisions they make. Consequently, cubicle warriors have lost control over their decisions. They are actually dehumanized, and have become marionettes of digitalized warfare. If we want to hold cubicle warriors reasonably responsible, it is essential that they have real control over their decisions, with a vivid awareness of what is at stake, so that they can make deliberate, and thus truly responsible, decisions.
An appropriate solution needs to strike a proper balance between emotional and moral attachment and detachment. This requires ethical design of the computer systems used by cubicle warriors to make life-and-death decisions. Such systems should both communicate the moral reality of the consequences of the decisions of cubicle warriors and reduce the strong emotions cubicle warriors feel, in order to reduce the number of war crimes (Sparrow 2009). Developing such systems is a real challenge. More social and psychological research on moral disengagement with respect to cubicle warriors is necessary. Results of existing psychological research suggest the potential value of further efforts to better understand moral disengagement processes. Understanding these processes is extremely important when designing computer systems for cubicle warriors, and may lead to the development, testing and implementation of interventions that counter the negative effects of moral disengagement induced by such systems.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Footnotes
1. See also Fieser and Dowden (2007).
2. Thanks to the reviewer, who made this point.
3. Rules of Engagement comprise directives issued by competent military authorities that delineate both the circumstances and the restraints under which combat with opposing forces is joined.
Literature
Achterhuis, H. (1998). De Erfenis van de Utopie. Amsterdam: Ambo.
Aquino, K., Reed, A., Thau, S., & Freeman, D. (2007). A grotesque and dark beauty: How moral identity and mechanisms of moral disengagement influence cognitive and emotional reactions to war. Journal of Experimental Social Psychology, 43, 385–392.
Arkin, R. C. (2007). Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture (Technical Report GIT-GVU-07-11). Atlanta: Georgia Institute of Technology.
Asaro, P. (2007). Robots and responsibility from a legal perspective. In Proceedings of the 8th IEEE 2007 International Conference on Robotics and Automation, Workshop on RoboEthics, Rome, 14 April 2007.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
Barnett, T. P. M. (1999). The seven deadly sins of network-centric warfare. US Naval Institute Proceedings (January), 36–39.
Bauman, Z. (1997). Life in fragments: Essays in postmodern moralities. Oxford: Blackwell.
Borgmann, A. (1984). Technology and the character of contemporary life. Chicago: University of Chicago Press.
Cumming, M. L. (2006). Automation and accountability in decision support system interface design. Journal of Technology Studies, 32(1), 23–31.
Dennet, D. C. (1997). When HAL kills, who’s to blame? Computer ethics. In D. G. Stork (Ed.), HAL’s legacy: 2001’s computer as dream and reality (pp. 351–365). Cambridge, MA: The MIT Press.
Detert, J. R., Treviño, L. K., & Sweitzer, V. L. (2008). Moral disengagement in ethical decision making: A study of antecedents and outcomes. Journal of Applied Psychology, 93(2), 374–391.
Donnelly, S. B. (2005). Long-distance warriors. Time Magazine (4 December).
Fischer, J. M., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge: Cambridge University Press.
Grossman, D. (1996). On killing: The psychological cost of learning to kill in war and society. New York: Black Bay Books.
Gruner, W. P. (1990). No time for decision making. US Naval Institute Proceedings (January), 39–41.
Ham, J., & van den Bos, K. (2010). The merits of unconscious processing of directly and indirectly obtained information about social justice. Social Cognition, 28, 180–191.
Kaplan, R. D. (2006). Hunting the Taliban in Las Vegas. Atlantic Monthly (August).
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Farnham: Ashgate Publishing Limited.
Levenson, H. (1981). Differentiating among internality, powerful others, and chance. In H. M. Lefcourt (Ed.), Research with the locus of control construct: Vol. 1. Assessment methods (pp. 15–63). New York: Academic Press.
McAlister, A. L. (2001). Moral disengagement: Measurement and modification. Journal of Peace Research, 38, 87–99.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied, 80, 1–28.
Scanlon, T. M. (2008). Moral dimensions: Permissibility, meaning, blame. Cambridge: Harvard University Press.
Sharkey, N. (2008). Cassandra or false prophet of doom: AI robots and war. IEEE Intelligent Systems, 23(4), 14–17.
Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the twenty-first century. New York: The Penguin Press.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Sparrow, R. (2009). Building a better warbot: Ethical issues in the design of unmanned systems for military application. Science and Engineering Ethics, 15(2), 169–187.
Tanielian, T., & Jaycox, L. H. (Eds.). (2008). Invisible wounds of war: Psychological and cognitive injuries, their consequences, and services to assist recovery. Santa Monica, CA: RAND Corporation.
Treviño, L. K., & Youngblood, S. A. (1990). Bad apples in bad barrels: A causal analysis of ethical decision-making behavior. Journal of Applied Psychology, 74, 378–385.
US Department of Defense. (2009). Unmanned systems roadmap 2009–2034. Washington: Government Printing Office.
Van de Poel, I. R., & Royakkers, L. M. M. (forthcoming). Ethics, technology and engineering. Oxford: Blackwell.
Verbeek, P. P. (2005). What things do: Philosophical reflections on technology, agency, and design. University Park: Pennsylvania State University Press.
Veruggio, G., & Operto, F. (2008). Roboethics: Social and ethical implications of robotics. In B. Siciliano & O. Khatib (Eds.), Springer handbook of robotics (pp. 1499–1524). Berlin: Springer.
Walzer, M. (1977). Just and unjust wars: A moral argument with historical illustrations. New York: Basic Books.
Metadata
Title
The cubicle warrior: the marionette of digitalized warfare
Authors
Lambèr Royakkers
Rinie van Est
Publication date
01-09-2010
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 3/2010
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-010-9240-8
