Sparrow, Robert. “Robots and respect: Assessing the case against autonomous weapon systems.” Ethics & International Affairs 30, no. 1 (2016): 93-116.
(2) Out of basic respect for enemy combatants and non-combatants alike, the legitimate use of any weapon requires that someone can be held responsible if wrongful harm arises as a result of its use.
Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).
Sparrow, Robert. “Killer robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62-77.
2. No human intervention needed: An LAW-1 is capable of identifying targets and using lethal force without human intervention. For example, an unmanned aerial vehicle (UAV) that uses computer vision techniques to discern active combatants from non-combatants, then shoots the combatants with an attached gun without waiting for human approval, would qualify as an LAW-1. An aerial vehicle that requires a remote pilot to operate it is not an LAW-1.
I make two contentions about accidental war crimes caused by LAW-1s. Firstly, many of these automation failures are a result of gross negligence and should have been foreseen by human programmers. As in other cases of negligence, it is appropriate to hold some human beings morally responsible for the results. For example, weapons company executives and/or military leadership could justifiably be imprisoned for some accidents. Secondly, the accidents which could not have been foreseen or prevented through sensible design practice do not give us special reason to dismiss LAW-1s. These accidents are not dissimilar from the misfiring of a gun, or human mistargeting of an unsophisticated missile.
Here is where the existence of a responsibility gap is most plausible. Sparrow argues that “the more the system is autonomous then the more it has the capacity to make choices other than those predicted or encouraged by its programmers. At some point then, it will no longer be possible to hold the programmers/designers responsible for outcomes that they could neither control nor predict” (Sparrow 2007, 70).
Sparrow’s response to the charge that non-autonomous weapon-related unjust killings sometimes also have responsibility gaps is that “if the nature of a weapon, or other means of war fighting, is such that it is typically impossible to identify or hold individuals responsible for the casualties that it causes then it is contrary to [the] important requirement of jus in bello” (Sparrow 2007, 67). But I have argued that, at least for the LAW-1s currently being deployed and developed by the world’s militaries, the responsibility gap is far from typical. By this, I mean that the overall number of LAW-1-caused war crimes for which no one can be held morally responsible is plausibly smaller than Sparrow needs for his quoted response to be compelling.
To be clear, LAW-1s still identify and kill people without human intervention. There will likely always be a small risk of accidentally violating international law when using an LAW-1, even if no negligence is involved. But there is no morally relevant difference between this and a human accidentally keying in the wrong target for a missile, or even a gun misfiring and hurting a surrendered enemy combatant. If LAW-1s have a very high rate of accidental killings, then they should not be used, for the same reason that a very inaccurate missile should not be used. The degree of autonomy exhibited by a weapons system is only relevant insofar as it is correlated with the frequency of accidents; the responsibility gap is not a reason to discount the deployment of LAW-1s with low accident rates.
A body of machine learning research has identified, forewarned of, and discussed these potential failure modes in detail.[2] I think it is reasonable to expect LAW-1 programmers to rigorously test their systems to ensure that the frequency of war crimes committed is exceedingly low. Sensible development of LAW-1s might involve intensive testing on representative datasets, early-stage deployments in real combat zones without weaponry to check whether non-combatants can be consistently identified, etc. Techniques to solve the problem of misspecified goals (in this case, goals compatible with war crimes) continue to be developed (Ouyang et al. 2022). The comparatively specific objectives given to LAW-1s make overcoming these technical challenges easier than for ML models given very general objectives. And, in the worst-case scenario, LAW-1s committing war crimes can be quickly recalled, and either decommissioned or improved to avoid recurrences.
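To make the idea of rigorous pre-deployment testing a little more concrete, here is a minimal sketch, assuming a held-out, representative test set and purely hypothetical numbers, of the kind of statistical check a developer might run: a one-sided upper confidence bound on the rate at which a perception model misidentifies non-combatants as combatants. The function name and threshold are illustrative, not a description of any real system.

```python
# A minimal, illustrative sketch (not a real evaluation pipeline): given results
# from testing on a held-out, representative dataset, compute an upper confidence
# bound on the rate at which the model misidentifies non-combatants as combatants.
from scipy.stats import beta


def misidentification_upper_bound(errors: int, trials: int, confidence: float = 0.99) -> float:
    """One-sided Clopper-Pearson upper bound on the true misidentification rate."""
    if errors >= trials:
        return 1.0
    return float(beta.ppf(confidence, errors + 1, trials - errors))


# Hypothetical test result: 2 misidentifications in 200,000 held-out cases.
bound = misidentification_upper_bound(errors=2, trials=200_000)
print(f"With 99% confidence, the misidentification rate is below {bound:.2e}")

# A deployment policy might require this bound to sit well below the error rate
# expected of human operators in comparable conditions (threshold is illustrative).
ACCEPTABLE_RATE = 1e-4
assert bound < ACCEPTABLE_RATE, "System does not meet the pre-deployment bar."
```

Such a bound only reflects performance on data resembling the test distribution; real combat conditions can differ, which is one reason the staged, weapon-free deployments mentioned above matter.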
Bibliography
Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete problems in AI safety.” arXiv preprint arXiv:1606.06565 (2016).
There are some crimes, such as killing non-combatants and mutilating corpses, so vile that they are clearly impermissible even in the brutal chaos of war. Upholding human dignity, or whatever is left of it, in these situations may require us to hold someone morally responsible for violation of the rules of combat. Common sense morality dictates that we owe it to those unlawfully killed or injured to punish the people who carried out the atrocity. But what if the perpetrators weren’t people at all? Robert Sparrow argues that, when lethal autonomous weapons cause war crimes, it is often impossible to identify someone–man or machine–who can appropriately be held morally responsible (Sparrow 2007; Sparrow 2016). This might explain some of our ambivalence about the deployment of autonomous weapons, even if their use would replace human combatants who commit war crimes more frequently than their robotic counterparts.
When considering my arguments, it is prudent to think about why such accidents happen. Not all LAW-1s use machine learning (ML) techniques, but ML is widespread enough in tasks important for LAW-1s, such as computer vision, that it is worth exploring in some detail. In general, a machine learning-powered LAW-1 might fail because (a) it is accidentally given a goal compatible with war crimes, without robust constraints, and/or (b) it fails at achieving its goal or staying within its constraints (e.g., misidentifying non-combatants as enemy combatants about to shoot friendly combatants).[1]
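As a toy illustration of this distinction, and nothing more, consider the following sketch. Every name here is hypothetical and stands in for a vastly more complex perception-and-control pipeline; it is meant only to show how failure mode (a) differs from failure mode (b).

```python
# Toy illustration of the two failure modes; all names are hypothetical and
# stand in for a far more complex perception-and-control system.

def engage_decision(classified_combatant: bool,
                    classified_surrendering: bool,
                    respect_constraints: bool = True) -> bool:
    """Return True if the system would use lethal force on a detected person."""
    if respect_constraints and classified_surrendering:
        return False  # constraint: never engage someone identified as hors de combat
    return classified_combatant


# Failure mode (a): misspecified goal/constraints. If the deployed objective
# omits the surrender constraint, the system pursues its goal 'successfully'
# and still produces an unlawful engagement.
print(engage_decision(classified_combatant=True,
                      classified_surrendering=True,
                      respect_constraints=False))    # True -> unlawful engagement

# Failure mode (b): capability failure. The objective is fine, but the vision
# model mislabels a non-combatant as a combatant, so the constraint never fires.
actually_a_combatant = False        # ground truth
classifier_says_combatant = True    # misclassification
print(engage_decision(classified_combatant=classifier_says_combatant,
                      classified_surrendering=False))  # True -> unlawful engagement
```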
(1) There is a responsibility gap for some war crimes caused by lethal autonomous weapons, meaning that no one can be held morally responsible for the war crime.
This article received an honourable mention in the undergraduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics.
Written by Tanae Rao, University of Oxford student
[1] The former category is not limited to models for which goals are misspecified; I intend for ‘inner alignment’ failures, also known as goal misgeneralisation, to be included as well (see Langosco et al. 2022).
Therefore, the responsibility gap is not a compelling reason to refrain from developing and deploying certain kinds of lethal autonomous weapons. In fact, the need to minimise accidents may justify more expenditure on developing LAW-1s to be as safe as is feasible. Additionally, further research should establish a clearer classification of the degrees of autonomy displayed by different weapons systems, insofar as they are relevant to moral responsibility. Not all lethal autonomous weapons have the same ethical implications, and it is dangerous to be overly general in our conclusions about such a consequential subject.
Therefore, it is likely that, in many though not all circumstances, humans can be held morally responsible for war crimes caused by LAW-1s, even if no human explicitly intended for a war crime to be committed. In particular, programmers can be held responsible for not carefully checking for common failure modes, military officials can be held responsible for not sufficiently auditing the weapons they choose to deploy, and states can be held responsible for failing to regulate the development of faulty LAW-1s. I acknowledge that careful, rigorous checks might not currently be possible for LAW-1s, let alone more sophisticated lethal autonomous weapons. But ensuring a very low failure rate in such systems is a technical problem to be solved, rather than some sort of mathematical impossibility. Perhaps the deployment of LAW-1s ought to be delayed until further progress on these technical problems is made, but this does not justify a complete ban.
Arkin, Ronald C. “The case for ethical autonomy in unmanned systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.
Langosco, Lauro Langosco Di, Jack Koch, Lee D. Sharkey, Jacob Pfau, and David Krueger. “Goal misgeneralization in deep reinforcement learning.” In International Conference on Machine Learning, pp. 12004-12019. PMLR, 2022.
3. No mental states: An LAW-1 does not have mental states, such as pain or regret, and does not have subjective experiences. It is reasonable to believe that all weapons systems currently in operation fulfil this criterion.
I deny the existence of a responsibility gap for an LAW-1. Therefore, the focus of this essay is on the first premise of Sparrow’s argument. There are two reasons why an LAW-1 might commit a war crime. First, this might be intentionally programmed, in which case at least one human being is morally responsible. Second, if the war crime was not a result of human intention, human beings can often be held responsible for gross negligence. I concede that there will be a small number of freak accidents involving the use of LAW-1s for which no human can be held responsible but argue that these cases give us no special reason to reject LAW-1s as compared with less sophisticated weapons.
[2] See Amodei et al. 2016 for an overview of these research problems.
1. Moderate task specificity: An LAW-1 is a model trained to fulfil a relatively specific task, such as ‘fly around this area and kill any enemy combatants identified if and only if this is allowed under international law’. An example of a task too specific for an LAW-1 is ‘fly to these specific coordinates, then explode’ (this would be more akin to an unsophisticated missile, land mine, etc.). An example of a task too general is ‘perform tasks that will help our state win the war’.

i. Humans develop and deploy an LAW-1 despite knowing that it will likely commit a war crime.
It should be uncontroversial that humans using an LAW-1 with the knowledge that it will likely commit war crimes are morally responsible for those crimes. For example, a human could knowingly train an LAW-1 with a reward function that incentivises killing non-combatants, even if killing non-combatants is not its explicit goal (e.g., the machine is trained to kill non-combatants that get in its way). The programmers of such a horrible weapon are morally responsible for the war crimes committed. If the military officials knew about its criminal programming, then they too would be morally responsible for the war crimes committed. Therefore, if humans knowingly deploy an LAW-1 that will commit war crimes, there is no responsibility gap.
(C) Therefore, we should not use lethal autonomous weapons during wartime.
I will now outline Sparrow’s argument that lethal autonomous weapons introduce a responsibility gap.
ii. Humans deploy an LAW-1 without knowing that it could commit a war crime.
This essay rejects Sparrow’s argument, at least as it applies to a wide class of lethal autonomous weapons I call ‘LAW-1’. When LAW-1s cause war crimes, at least one human being can usually be held morally responsible. I acknowledge that there is a subset of accidents for which attributing moral responsibility is murkier, but they do not give us reason to refrain from using LAW-1s as compared with less sophisticated weapons like guns and missiles.
LAW-1s are the weapons systems that most people envision when imagining a lethal autonomous weapon. I predict that most systems developed in the next decade will be LAW-1s, although some may push the boundaries between LAW-1s and the next generation of lethal autonomous weapons. The defining characteristics of an LAW-1 are:
Despite being able to use lethal force without human intervention, LAW-1s are not so different from a gun with regard to the attribution of moral responsibility. Just as a gun might misfire, or a human being may accidentally (and understandably) misaim, LAW-1s might not fulfil the task intended by the humans developing and deploying them. If these accidents are just as infrequent as accidents caused by human combatants, then the existence of a responsibility gap does not give us compelling reason to abandon LAW-1s. As technology develops, it seems likely that accident rates will decrease to the point that LAW-1s are superior to human combatants. Clever programming can allow LAW-1s to escape the violence-inducing cognitive biases shown to be present in human militaries, to take in and relay relevant information faster than humans, and ultimately to render law-abiding decisions in chaotic situations (Arkin 2010).
Crucially, developers of LAW-1s need not be able to predict exactly how or why their machines will fail in order to be held morally responsible for that failure. As long as the LAW-1 committed a war crime as a result of a known failure mode (e.g., glitches in computer vision misclassifying non-combatants) that was not ruled out with a sufficient degree of confidence, developers (among others) can be held morally responsible. This is analogous to an unsophisticated missile whose faulty targeting system causes target coordinates to be miscommunicated, resulting in the accidental bombing of a hospital. The weapons manufacturer can plausibly be held morally responsible for not rigorously testing their product before selling it to the military.
