The development of weapons for military use is moving at a considerable pace, and is accelerating each day. Nations and armies across the globe are already using an array of autonomous weapons for different purposes. Bomb disposal robots are becoming more advanced and better equipped for dealing with explosives, whilst self-driving vehicles are reducing the risk of harm to foot soldiers on missions. At this rate, it is only a matter of time before these machines are developed to become ‘fully autonomous’: able to carry out tasks on the battlefield without human input or pre-determined programming. One type of autonomous machine that has been discussed is the ‘Lethal Autonomous Robot’ (LAR). Designed to replace human soldiers on the battlefield, these robots will be able to independently target and engage enemy soldiers in combat.
The push for developing LARs in the future is a classic example of the consequentialist approach to war. To reduce human suffering and provide the best possible outcomes on the battlefield, consequentialists would argue in favour of using LARs.
In this paper, I will refute the consequentialist argument for using LARs in war by showing that it fails to address the issue of ascribing moral responsibility. The main opponents to Arkin’s paper are Johnson and Axinn, who make two main claims. The first is that killing in war should not be carried out by LARs, because decisions involving killing require human emotions. The second concerns the difficulty of attributing blame to a LAR when it carries out military actions. A human soldier can be held responsible if he kills on the battlefield, as he can target enemy soldiers and understand that they fit specific criteria under which he may engage. If a human soldier can be ascribed moral responsibility, can the same be said for a LAR? The machine will have been constructed to make independent decisions and can kill if the mission objective requires it. However, as there is no human input in the decision to target, there are certain criteria that the LAR cannot fulfil if it is to be ascribed full moral responsibility.
I will evaluate two responses to my argument, both of which attempt to provide cases in which a LAR could be ascribed moral responsibility. The first is put forward by Dennett, who believes LARs could potentially possess an artificial intelligence matching human moral intelligence; if so, moral responsibility could be ascribed. However, without the means to create a machine this advanced, we cannot accept this as a viable option. The second comes from Marco Sassoli, who attempts to attribute moral responsibility to the human commanding officers and engineers, claiming that there will always be a human element involved and that the debate over LARs’ responsibility is therefore irrelevant. This does little to satisfy the need for a morally responsible agent who actually carries out the action that produces the moral result. LARs cannot satisfy this criterion, and therefore cannot be ascribed moral responsibility.
In short, it seems that consequentialists believe that LARs would be a better alternative to using human soldiers on the battlefield. However, this can only be the case if we can attribute blame to LARs for their actions on the battlefield. If we cannot find a means to do this, we cannot claim that they will bring about better outcomes in war. Furthermore, it seems to be morally wrong to consider using LARs in a military capacity if they cannot be held morally accountable for targeting and killing enemy soldiers.
It is common knowledge that modern warfare is in constant flux regarding technology. In the aftermath of the Second World War and during the Cold War arms race, there was an urgency for state militaries to develop advanced weapons. The basic reasoning behind this involved lessening human suffering and gaining an advantage over enemies. Today, these basic motives remain intact. However, new discoveries in advanced weaponry and ambitious militaries have created the need for more sophisticated military weapons. This has led to significant advances involving ‘autonomous’ machines. Unlike automatic machines, autonomous machines operate with varying levels of human input: from a human controller dictating every action of the machine, to a machine that can independently make decisions on the battlefield. There are already many examples of autonomous machines in modern warfare. Drones, bomb disposal robots and self-driving vehicles are used on a day-to-day basis, and are constantly being improved. One of the main concerns involves the development of fully autonomous machines, which has led to debate over the potential development of ‘Lethal Autonomous Robots’ (LARs) in war. A LAR can be described as an autonomous machine that can independently target and engage enemy soldiers on the battlefield, without relying on a human operator to carry out functions or make decisions.
As the field is still largely theoretical, definitions of what these LARs might look like differ. Scharre and Horowitz (2015) provide three distinct categories into which autonomous machines fall, defined by the level of independence each machine has in making decisions about targeting and the use of lethal force. The final type of autonomous weapon (fully autonomous robots, or Lethal Autonomous Weapons) is the focus of this dissertation, and is the main source of worry when thinking about ethics in an advancing era of warfare.
The first level of autonomy comprises machines with a “human in-the-loop” (Scharre and Horowitz, 2015, pp. 8). These machines work via a remote-controlled system, depending on human input to carry out specific tasks. In other words, a human operator is in control of the machine and the robot makes no independent decisions. Additionally, there is a clear level of responsibility attached to the human operator for the actions of the machine. For instance, if the machine accidentally killed non-combatants on the battlefield, there is an obvious line of blame back to a human input. With a drone pilot, there is clear intent in who the pilot wants to target, meaning there is a pre-determined target to be engaged when using the machine.
These types of robots are used regularly in modern warfare today. They range from surveillance robots (UAVs) to bomb disposal robots used to clear potential mines and threats to military convoys. Military drones are a common example. A human pilot is not in the actual machine, but controls the aerial vehicle from a distance via remote control. The drones can also be pre-programmed and work with some level of autonomy; for instance, they can possess a decision-making process involving tracking and navigation. However, actions regarding targeting enemies and firing shots are down to the human pilot and their decision-making ability. The pilot is very much in control when it comes to identifying enemies and pulling the trigger if necessary. This is the lower end of the autonomous scale in terms of independent decision-making.
The second level described by Scharre and Horowitz (2015) is the semi-autonomous robot. Unlike pre-programmed robots, this machine can carry out most actions of its own accord. However, when faced with unexpected situations, a “human-on-the-loop” (pp. 12) would be required to respond. The robot can make decisions on who to target and engage, but a human supervisor can intervene if required. There is still a significant level of responsibility attributed to the programmer. Even though the machine can make most decisions independently, there is still a level of human input, in that there is a clear idea of who or what will be targeted. In this instance, a semi-autonomous robot follows pre-set instructions and works through a form of programming: ‘If X occurs, you will respond by doing Y’. The programmer of the machine has already made a pre-determined list of who is likely to be targeted, and is aware of the consequences if they are targeted by the machine.
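The ‘If X occurs, respond by doing Y’ pattern described above can be sketched in code. The following is a purely illustrative toy, not drawn from any real weapons system: the threat categories, responses and function names are invented to show how a pre-set rule table combines with a human-on-the-loop override.

```python
# Hypothetical sketch of a semi-autonomous rule table with a
# "human-on-the-loop". All categories and responses are invented
# for illustration only.

def semi_autonomous_response(detected_object, human_override=None):
    """Follow pre-set 'If X, do Y' rules; a human may override."""
    rule_table = {
        "incoming_missile": "intercept",     # pre-determined by the programmer
        "friendly_aircraft": "ignore",
        "unknown_object": "alert_operator",  # unexpected cases defer to a human
    }
    if human_override is not None:
        return human_override                # the human can always intervene
    # Anything outside the pre-programmed list also defers to the operator.
    return rule_table.get(detected_object, "alert_operator")
```

The point of the sketch is that every possible response already exists in the programmer’s table: the machine never generates a decision of its own, which is why responsibility remains traceable to the humans who wrote the rules.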
These supervised robots are used regularly today across a range of different roles. The most common examples are Automatic Weapon Defence Systems (AWDS). These systems are manufactured to monitor potential threats and respond automatically to prevent incoming missile attacks. An example of a successful AWDS is Israel’s Iron Dome. Developed in response to violence in the Middle East, this weapons system was manufactured to intercept and destroy military missiles to protect civilian areas. A radar system detects incoming missiles, and programmed software then shoots them down. This is like the earlier-mentioned C-RAM system, which can only respond in ways that have been pre-programmed by software developed by humans.
The third and final type of independent machine is the fully autonomous robot. These robots operate completely differently to those mentioned above. Rather than taking orders from a human instructor, these robots act independently through their own software and programming. In other words, the human controller is “out-of-the-loop” (Scharre and Horowitz, 2015, pp. 8). In military terms, these autonomous robots are often known by a name coined by many national militaries: Lethal Autonomous Robots (LARs).
Johnson and Axinn (2013) provide a substantial definition of how LARs might operate. They are “systems that are directed by a computer program alone, with no human in the loop. Autonomous robots may make targeting decisions and fire weapons against humans or other targets on their own based on some computer algorithm” (pp. 129). These robots can carry out independent functions without the decision-making abilities of a human supervisor. For instance, the programmer simply gives the robot an objective, and it will carry out the mission by reacting to real situations on the battlefield. One of the most interesting characteristics of these machines is that they can react to situations and decide who to target and when to pull the trigger when faced with an enemy. This theoretical programming would allow the robot to make its own judgments when facing an enemy in combat. Whether it is to target or kill the enemy combatant, the robot makes the decision independently. As Johnson and Axinn put it, the “human is no longer making the lethality decision” (pp. 130).
We now have a distinguishable definition of a LAR: a machine that does not require a ‘human-in-the-loop’ to make decisions and can be used as a weapon to engage enemy soldiers of its own accord. Since there is no need for human input, human soldiers can avoid the conflict directly, arguably preventing casualties on the battlefield. Either way, the potential development of LARs could transform the very face of how we approach war, and could have a much higher significance than previous developments in warfare.
What will happen to the human input in relation to LARs? If we take the decision to target and kill away from a human, are we also taking the human decision-making process out of situations in war? This is one of the key issues many have with the possible construction and implementation of LARs. It is important to note that these machines are depictions of what autonomous weapons might consist of in the future. However, even though they are theoretical and experimental at best, research and advances in robotics have furthered the possibility that they could become a reality. As the development of these machines seems inevitable, a particular question arises: where does this leave the role of humans in moral decision-making? In modern war, most weapons we use require a human to make decisions and be in complete control when targeting and engaging enemies (human in the loop). There has also been the development of pre-programmed autonomous weapons that operate almost independently, yet allow for a human to intervene if necessary (human on the loop). If we accept the possible development of LARs, humans might have to deal with the possibility of relinquishing the human control needed to make moral decisions. We simply give the machine an objective and release it onto the battlefield to do what it has been programmed to do.
This is a worrying prospect, and one that leads to questions over LARs’ effectiveness and reliability. More importantly, should we allow LARs to make decisions that involve targeting or even killing humans if no accountability is attached to the LAR’s actions? Whilst there are obvious short-term benefits to LARs (reducing risk to soldiers), there is lingering doubt if we cannot assign blame in the first instance. It is also not certain that they will be safe to use in a military capacity. J.P. Sullins (2010) highlights the risk of using experimental autonomous machines without proper testing and assurance that they are fully reliable:
“We should be entirely confident of the abilities of these systems before trying to quickly deploy them as weapons before we are certain of their impact on the ethics of the battlefield, as battle is one of the most ethically fraught of human activities, and in doing so we have not made the battlefield safer for non-combatants in the crossfire.” (pp. 274)
This point demonstrates the complexity of morality in war, and the need for ethical boundaries to be established. The main concern surrounding LARs is the issue of responsibility. While it seems that these theoretical machines can make decisions on who to target and who to kill, it is difficult to attribute responsibility for these actions without a human input.
As seen in the previous chapter, there are several concerns about the development of LARs for use in a military capacity. One of the main questions revolves around the removal of human input when making situational decisions. If the rise of these machines is inevitable, humans might have to relinquish decision-making abilities in situations on the battlefield. Many of these situations involve causing harm and the taking of human lives. Therefore, humans would be giving up not only the ability to make decisions on the battlefield, but the ability to make moral decisions.
Even though these machines are years away from being developed, the steady development of semi-autonomous weapons has encouraged debate over the potential use of LARs on the battlefield. On one side, there are those who argue that LARs are the next logical step for ground soldiers in battle. Having an autonomous machine that can carry out the same tasks as a regular soldier (but is better equipped) seems to be a positive solution to human suffering in war. Not only do LARs seem more suited to the environment of war, but using them would reduce human casualties on the battlefield. If the preservation of human life is the goal in war, LARs seem an appropriate solution.
Yet those who argue in favour of these machines fail to see the ethical implications of their use in battle. The ‘inevitable’ rise of these machines seems exaggerated, and the claim that flawed human nature is itself the main reason to develop LARs needs to be reassessed. Several positions provide counter-arguments to the supposed role that humans play in suffering and unethical decisions in war. War is a human issue, and needs human emotions and ethical decision-making. Without these qualities, the use of LARs might neglect the need for human morality when making decisions between life and death.
This chapter will examine the main argument for the development of LARs (Arkin’s), and how it has been heavily influenced by the ethical position of consequentialism. This strand of ethics judges actions by whether they produce the best results in terms of overall good, and LARs seem to promise this. This leads to the claim that LARs are more ethical than human soldiers on the battlefield. However, this does not seem enough to justify using LARs in war, and I will attempt to provide a suitable counter-argument against the claim. Arkin and Sassoli put forward their accounts of why LARs should be implemented, followed by responses from Tonkens and from Johnson and Axinn. The response from the latter will be developed in chapter three, and will address the main concern of this paper: LARs and the issue of ‘moral responsibility’.
Before we establish the main argument for the implementation of LARs, we must first understand the position of ‘consequentialism’. The most notable thinkers associated with this position include Jeremy Bentham and John Stuart Mill. In ‘Ethics: Consequentialism’, Julia Driver (2017) provides a short but useful summary of what it entails. Consequentialism is a type of ‘normative ethical theory’, meaning it provides guidelines for making moral evaluations and may also put forward rules for acting morally. In its most basic form, consequentialism claims that the moral quality of an action is completely determined by its consequences. This claim has two components: the first is an account of what is good; the second is an account of how we achieve that good outcome. The most common example of a consequentialist theory is classical ‘Act Utilitarianism’, which maintains that the right action is the one that maximises pleasure; in other words, the action with the best overall consequences in terms of pleasure or happiness. The aim of this theory is to promote moral actions that result in the highest amounts of pleasure and the lowest amounts of suffering. The guiding idea behind this moral theory is to make the world a better place (Wireless Philosophy, 2017, web source).
Utilitarian consequentialism is an influential line of thought when applied to the wider issue of war. Modern warfare is directed through a utilitarian approach, meaning it is quantitative and reductive: it places importance on minimising the suffering of war for the largest number of people. ‘Just War Theory’ is heavily influenced by utilitarianism, in that it provides guidelines for how wars should be conducted to reduce human suffering. William H. Shaw (2016) provides a key link between utilitarian thought and the position of Just War Theory. The first principle deals with when it is right or wrong to go to war, and is referred to as ‘jus ad bellum’; reasons for going to war include having the right intention, a just cause, and war being a last resort. These are followed by principles guiding how a war is to be fought once it starts. This is referred to as ‘jus in bello’, and consists of three main principles: the Principle of Necessity (pp. 101), the Principle of Proportionality (pp. 102) and the Principle of Discrimination and Non-Combatant Immunity (pp. 102). Shaw focuses specifically on the third principle, as he identifies similarities with utilitarianism. The principle states: “Belligerents must discriminate between military and non-military targets. They are not to target non-combatants and must make reasonable efforts to avoid harming them” (pp. 102).
As a result, we can give an account of what is good. In this situation, the Principle of Discrimination and Non-Combatant Immunity holds that refusing to target non-combatants and civilians will produce the least suffering and maximise the available happiness. If we avoid causing harm to non-combatants, we produce greater levels of good and minimise human suffering.
In his article ‘The Case for Ethical Autonomy in Unmanned Systems’, Ronald Arkin (2010) demonstrates a utilitarian perspective on the use of LARs in a military capacity. He puts forward the claim that LARs could potentially be ‘more ethical’ than human soldiers when faced with moral decisions in battle. He supports his claim by demonstrating the flawed nature of humans in war, and believes that this leads to unnecessary levels of human suffering. Thus, he believes using LARs will prevent risking human lives and promote better moral decisions.
Arkin’s first claim revolves around his interpretation of human nature in war, and how it contributes to unnecessary human suffering. He believes that war is inevitable, and links the horrors of war to human failures. He makes note of instances where human soldiers resist the moral code of war, including hateful attitudes towards the enemy, and highlights examples of soldiers hesitating to report unethical acts (pp. 335). Arkin notes various accounts in which soldiers acted unethically, claiming that this failure to act ethically and to report unethical acts was caused by factors including frustration, revenge, joy of killing or poor orders (pp. 336). It is also worth noting the psychological damage that war can inflict on a soldier, which Arkin argues can impact a soldier’s ability to act in accordance with the ethics of war:
“…it seems unrealistic to expect normal human beings by their very nature to adhere to the Laws of Warfare when confronted with the horror of the battlefield, even when trained.” (pp. 338)
The influence of emotions on human soldiers identified by Arkin makes it difficult to turn humans into efficient soldiers. A World War Two study reported that only a low proportion of soldiers actually fired at enemy soldiers; rather than following orders, these soldiers disobeyed them. Arkin believes this may be down to soldiers lacking an “’offensive spirit’” (pp. 337). A further reason why humans make bad soldiers involves our reasoning for going to war: different countries and states initiate wars for unethical reasons, including the draws of profit, power, tactical utility and even genocide. Arkin is certain that the ‘inevitability’ of automated warfare will encourage humans to use highly advanced weapons to commit further immoral acts if not restrained:
“Something must be done to restrain technology itself, above and beyond the human limits of war-fighters themselves. This is the case for the use of ethical autonomy in unmanned systems.” (pp. 338)
Because of these human shortcomings, Arkin believes his claim for using LARs in war instead of human soldiers is justified. LARs could perform better than humans not only in the practical sense, but also ethically. Their ability to adhere more closely to the rules of war while reducing human casualties makes them a viable solution. He puts forward key attributes of LARs that will make them better suited for deployment on the battlefield:
- LARs possess the ability to “act conservatively” (pp. 333), and have no need to target and kill in the act of self-defence.
- LARs are better equipped – Robots have better equipment and can gather more information when deployed on the battlefield.
- Having no emotions, LARs have no need to act out of anger, fear, frustration or jealousy. These emotions prompt unethical actions, which are a cause of human suffering in war.
- LARs avoid the problem of “scenario fulfilment” (pp. 333), a psychological phenomenon in which humans distort contradictory information in stressful situations to fit their pre-existing beliefs. This would be prevented if LARs were used on the battlefield.
- LARs can integrate more information than the average human soldier.
- LARs could “independently and objectively monitor ethical behaviour in the battlefield” (pp. 334).
In short, Arkin’s position on LARs is heavily influenced by consequentialist thinking. If human nature is a cause of unnecessary suffering, then LARs are a better alternative to minimise these levels of suffering. Additionally, LARs are much better suited to the environment of war (both practically and ethically). As a result, Arkin believes that LARs can “enhance mission effectiveness and serve as drivers for the deployment of these systems” (pp. 334).
It is easy to see why Arkin’s argument seems promising. From a practical perspective, replacing human soldiers means reduced fatalities on the battlefield and expendable foot soldiers. Additionally, the fact that LARs are better equipped and immune to certain attacks means we can distance human participation further from the dangers of the battlefield. Yet this can also be seen in a negative light. Ryan Tonkens (2012) provides a significant response to Arkin’s calls for the development of LARs.
Tonkens suspects that this ‘urgency’ to replace humans with robots is exaggerated. He understands the reasoning behind wanting to lessen human suffering in war, but struggles to see why LARs would automatically solve the problem. Arkin believes that there will be less unethical human behaviour contributing to suffering if we use LARs on the battlefield. Tonkens does not accept this, and believes we cannot be justified in simply assuming LARs will be more ethical than human soldiers. As these machines are theoretical and years from being developed, there is no guarantee that using them will produce better ethical behaviour. He notes Arkin’s claim that human soldiers are more likely to carry out immoral actions, and responds that having the ability to act immorally does not mean immoral acts will be carried out. Additionally, human soldiers are more than capable of exhibiting “morally praiseworthy and supererogatory behaviour” (pp. 151); that is, acts involving bravery, heroism and sacrifice in battle. Tonkens believes that the introduction of LARs may reduce the opportunity to demonstrate such acts, and that human qualities required for war may be lost. This would ultimately lead to warfare becoming inhumane and ‘de-humanized’.
Tonkens believes that Arkin’s claims do not solve the problem of human suffering in war. To achieve this, he believes Arkin must provide a better argument for the use of LARs. Until then, it seems that Arkin exaggerates the need for these machines, and takes it for granted that LARs will make decisions on the battlefield that will provide less human suffering than human actions.
Following Tonkens’s criticism, Johnson and Axinn provide a more extensive counter-argument to Arkin’s position. They reject the idea that LARs should be deemed permissible on the battlefield, as the decision to kill a human being should not be relinquished to a machine. Whilst LARs are yet to become a reality, Johnson and Axinn believe the ethical implications should be considered sooner rather than later. They refer to the regret felt by those working on the Manhattan Project, where the ethical implications were not considered until after the use of the atomic bomb in the Second World War (pp. 129).
In ‘The Morality of Autonomous Robots’, Johnson and Axinn (2013) raise initial questions over the potential use of LARs, identifying three key issues with using these machines in battle. The first is the issue of tactical benefit. Some would argue that the clear benefit to soldiers and the reduced risk should make us feel obligated to develop LARs. Johnson and Axinn reject this idea, arguing that tactical benefit alone is insufficient to justify the use of LARs on the battlefield. Secondly, killing a human with a machine represents a significant loss of dignity. A LAR would kill enemy soldiers because of its basic programming; with no moral decision-making process present, LARs would be treating human soldiers as a mere means to achieve objectives. Johnson and Axinn believe that human soldiers should be treated as ends in themselves, and that LARs are not capable of doing this on the battlefield.
The third area of concern for Johnson and Axinn is who claims responsibility when LARs kill human soldiers. They emphasise the ‘big question’ surrounding LARs and killing on the battlefield: “Should we relinquish the decision to kill a human to a non-human machine?” (pp. 134). They answer no, as killing is a moral decision that can only be made by humans. A moral action requires a certain level of autonomy, rather than merely following another’s instructions. LARs may seem autonomous, as they do not require human input when making decisions on the battlefield, but this is not autonomy in the same sense as a human soldier’s. LARs cannot make a moral decision, as they are simply made up of ‘cogs and computer programming’. Johnson and Axinn sum this point up as follows: “In this way a machine can ‘act’ morally, by mimicking its programmer, but it cannot ‘be’ moral” (pp. 135). They also note that moral action requires certain levels of emotion and right intention. If we remove the human element from moral scenarios that involve killing on the battlefield, we run the risk of distancing ourselves further from the harsh realities of war. Johnson and Axinn quote Sifton to suggest that LARs will “foreshadow the idea that brutality could become detached from humanity-and yield violence that is, as it were, unconscious” (pp. 136).
From this point, we can deduce that moral decisions require certain human characteristics. Johnson and Axinn highlight that the moral decision to kill on the battlefield requires human reflection, intention and emotion. LARs do not possess these characteristics, and cannot be considered ethical if they can kill on the battlefield. As a result, Arkin’s claim that LARs can be more ethical than human soldiers does not take the moral seriousness of killing into account.
Arkin’s consequentialist position attempts to justify the use of LARs on the battlefield by claiming that they would be better suited to the harsh environment of war. Fewer human soldiers would be put at risk, preventing unnecessary human suffering. Additionally, a LAR is less likely to act unethically on the battlefield and make decisions which result in human suffering. Arkin’s argument involving human nature suggests that humans cannot be guaranteed to make the correct decisions whilst on the battlefield. Sassoli supports this claim, stating that emotions cloud a human soldier’s situational judgement. LARs do not possess these human emotions, meaning they are more likely to pursue the right action to bring about the best result.
On the other hand, the need to replace human influence on the ground seems overstated. Tonkens’s argument suggests that Arkin’s assumption that LARs will solve the problem of unethical human behaviour is exaggerated, and that a stronger case is needed to justify minimising human input in battlefield decisions. This is followed by Johnson and Axinn, who believe the use of LARs would treat enemy soldiers as a mere means, rather than as ends in themselves. Additionally, there is a persistent struggle to identify who is to blame when a LAR kills a human on the battlefield. This issue remains unsolved, and seems to be a key stumbling block for those who support using LARs in war.
The previous chapters have examined distinct types of autonomous weapons and various positions on the development of LARs. Consequentialists (Arkin) argue in favour of their development, claiming they can reduce human suffering on the battlefield and are better suited to the environment of war (practically and ethically). In response, Tonkens believes this position is exaggerated, and that the use of LARs will not guarantee a reduction in human suffering. Johnson and Axinn take this further by arguing for a pre-emptive ban on LARs. They establish that removing the human decision-making process from the battlefield further distances us from the harsh realities of war. Additionally, allowing LARs to kill raises serious concerns regarding human dignity and moral responsibility. This leads Johnson and Axinn to refute Arkin’s claim that LARs would be more ethical than human soldiers, posing difficulties for those who would argue for their use in war.
In this chapter, I will support the position argued by Johnson and Axinn. The issue of moral responsibility is a fundamental hurdle for those supporting the use of LARs. By showing how moral responsibility can be successfully ascribed to human soldiers and to the human agents behind other autonomous machines, it will become clear that no such ascription is available for LARs. Because LARs cannot be held morally responsible for their actions, it follows that they cannot make moral decisions. On this conclusion, LARs cannot be more ethical than human soldiers, and should not be developed for military purposes.
Before we attempt to establish the argument against LARs, we must first understand the concept of ‘moral responsibility’. Most debates involving moral responsibility focus on human involvement with moral action. Ascribing moral responsibility involves human moral agents carrying out actions that have significant consequences (Noorman, 2016). Using an example put forward by Gilbert Ryle, Joel Feinberg (1970) claims that we do not usually ascribe responsibility unless “something has somehow excited our interest; and, as a matter of fact, the states of affairs that excite our interests are often unhappy ones” (pp. 131). In other words, we usually seek to ascribe responsibility when someone has carried out a negative action or offense. It is worth noting that acts promoting happiness need accounting for too, yet the interest there is to understand rather than solely to give blame or praise.
When we examine moral responsibility, there are certain conditions required to ascribe it successfully. Eshleman (2016) proposes that the ascription of moral responsibility must involve three key ideas. First, there must be a “causal connection between the person and the outcomes of actions” (Web Source). An individual can only be held responsible if they can control the results of the action. Secondly, the individual must have “knowledge of and be able to consider the possible consequences” (Web Source) of their actions. If the individual is unable to know the negative consequences of an action, they could be excused from being at fault. The final idea claims that the individual must be “able to freely choose to act in a certain way” (Web Source). In other words, it would be illogical to hold someone accountable for events that are determined by forces unrelated to the individual.
This framework is important when applied to actions within war. The nature of war involves actions that include the targeting and killing of enemy soldiers. These are actions that result in certain consequences, and the ascription of moral responsibility is often required. Take the example of a human soldier. He has been deployed alongside other soldiers to secure a checkpoint on the battlefield. They have been given mission objectives, and are under orders to achieve them by any means. This involves potentially targeting and killing enemy soldiers. If they encounter enemy soldiers who prevent them from succeeding in their mission, they can target and kill them if necessary.
This example provides an instance where moral responsibility can be ascribed. Firstly, when the human soldier targets an enemy, he shoots to kill; he could also choose not to shoot. This demonstrates a causal connection between the soldier and the outcome of his actions, meaning he can control those outcomes. Secondly, the soldier is aware of the consequences when he targets the enemy: the action of targeting and shooting to kill will result in the enemy’s death. Lastly, the human soldier freely chooses to kill the enemy without external forces interfering. As a result, the human soldier can be ascribed moral responsibility for targeting and killing an enemy soldier. This can be summarised as follows:
- If we are to ascribe moral responsibility to an agent, there must be a causal connection between the agent and result of an action, the agent must be aware of the potential consequences of the action and able to freely choose to carry out the action.
- A human soldier possesses these traits.
- Therefore, moral responsibility can be ascribed to a human soldier.
In the instance involving a human soldier, it seems straightforward to ascribe moral responsibility. However, it becomes more difficult when autonomous weapons are involved. These machines create problems for how moral responsibility can be ascribed on the battlefield. Machines are not moral agents and cannot reflect on the consequences of their actions. However, we can still ascribe responsibility to the human agents in control of these weapons. A non-autonomous military drone is operated by a human pilot. The pilot controls the drone from afar, but still dictates the actions that the drone carries out. For example, the pilot commands the drone to target and engage enemy soldiers in a small town. The pilot can control the outcomes of his actions, he is aware of the consequence that targeting enemy soldiers will result in (death), and he is not at the mercy of external forces affecting his decision. This fits the criteria set out earlier in the chapter, and allows for the successful ascription of moral responsibility.
- If we are to ascribe moral responsibility to an agent, there must be a causal connection between the agent and result of an action, the agent must be aware of the potential consequences of the action and able to freely choose to carry out the action.
- A drone pilot possesses these traits.
- Therefore, moral responsibility can be ascribed to the drone pilot.
Semi-autonomous weapons make ascribing moral responsibility slightly more difficult. Using the example of an AWDS, it seems (at first) uncertain whether responsibility can be attributed to any human input at all. The engineer who designed the machine has simply programmed it to carry out a pre-determined programme. The AWDS will simply react to enemy soldiers fitting criteria that activate the weapon system to engage them. As a result, attributing responsibility seems difficult. Yet this approach misses the wider picture. When the machine was developed, the ‘pre-determined’ criteria for engaging enemy soldiers were drawn up by the engineer himself. The engineer has a clear idea of who will be targeted and is aware of the consequences when the machine engages enemies. There are also no external forces affecting his awareness and decision, so the engineer of the AWDS can be held responsible for the actions it carries out. Hence, the programmer also fits the criteria set earlier in the chapter:
- If we are to ascribe moral responsibility to an agent, there must be a causal connection between the agent and result of an action, the agent must be aware of the potential consequences of the action and able to freely choose to carry out the action.
- An AWDS programmer possesses these traits.
- Therefore, moral responsibility can be ascribed to the AWDS programmer.
From the above examples, we can see the successful ascription of moral responsibility for actions that involve targeting and killing enemies on the battlefield. The human soldier, drone pilot and AWDS engineer all fit the criteria set in section (I). Yet, can the same ascription of moral responsibility be applied to a LAR carrying out the same actions? A LAR is sent into battle, and comes face to face with enemy soldiers. Possessing highly advanced robotic technology, the LAR can independently make decisions on the battlefield. As a result, it can target enemy soldiers and decide to kill if necessary. There is no human input or pre-determined list of who to target. The LAR simply reacts to the environment it is placed in and can target enemy soldiers without a human controller. We can agree that the LAR is the agent that carries out the action of targeting and killing. However, because there is no moral input from a human controller, it is difficult to successfully ascribe moral responsibility to the LAR. To ascribe it, there must be some human input present that caused the action. This is especially relevant to actions that involve killing. If this is true, we cannot ascribe moral responsibility to LARs at all. This is again evident when applied to the necessary criteria for moral responsibility:
- If we are to ascribe moral responsibility to an agent, there must be a causal connection between the agent and result of an action, the agent must be aware of the potential consequences of the action and able to freely choose to carry out the action.
- A LAR cannot possess these traits.
- Therefore, moral responsibility cannot be ascribed to a LAR.
If we cannot ascribe moral responsibility to LARs, many issues arise if a LAR is allowed to kill on the battlefield. Johnson and Axinn (2013) highlight these points in their earlier argument. Giving a LAR the ability to take a human soldier’s life should not be permissible if it is not morally responsible. Johnson and Axinn believe that LARs kill humans through programming and hardware, not through moral awareness. Furthermore, the absence of intention and emotion in a LAR’s decision to target an enemy neglects the requirements for moral action. Simply ‘letting LARs loose’ on the battlefield distances human moral input from decisions involving targeting. Johnson and Axinn highlight this by quoting Sifton;
“The unique technology allows the mundane and regular violence of military force to be separated further from human emotion… foreshadow the idea that brutality could become detached from humanity – and yield violence that is, as it were, unconscious” (pp. 136).
This conclusion leaves those in support of the development of LARs in a difficult position. However, those in favour of using LARs could provide a possible response by adapting the given description of a LAR. The definition in the first chapter indicates that the machine can make independent decisions on the battlefield without a human controller, but gives no indication of how advanced this decision-making ability is. As a theoretical possibility, we could develop LARs with a decision-making ability similar to that of human soldiers, as a way of successfully ascribing moral responsibility to the machine. If the LAR is this advanced, then there seems to be no issue concerning moral responsibility.
This sounds very theoretical, and comes close to entering the debate involving ‘artificial intelligence’. However, Dennett (1997) believes this argument might not be as far-fetched as many think. He follows the line of thought that rejects the idea that human beings are the only agents to whom responsibility can be ascribed. Advances in computer-based technology and the possibility of artificial intelligence pose a significant challenge to the view that only human beings can have moral responsibility ascribed to them (Bechtel, 1985). Dennett’s argument revolves around the claim that we can ascribe moral responsibility to a machine if it possesses the ability of “higher order intentionality” (pp. 354). Dennett believes this ability could allow autonomous machines to possess beliefs and desires, and potentially have some form of moral awareness. If so, then a LAR with this ability could understand the consequences of its actions, as its actions would be derived from its own moral awareness. As a result, LARs could possibly be ascribed moral responsibility for certain actions. Although it is not likely that advances in AI will result in machines exactly like this, Dennett remains adamant that the development of systems with higher-order intentionality is a genuine possibility. This seems appealing, especially if we apply it to LARs. If there is a possibility of developing a LAR with this programming, we could potentially hold LARs equal to human soldiers when making decisions on the battlefield. More importantly, we would be able to ascribe moral responsibility for actions that involve targeting enemy soldiers.
Unfortunately, this argument is only possible if we rely on a definition of a LAR that requires higher levels of AI. In the case of killing on the battlefield, we are only concerned with a LAR that fits the original definition of making independent decisions without a ‘human-in-the-loop’. At present, there is no technology to suggest a LAR with advanced levels of AI will be developed soon. This is emphasised by Barandiaran et al. (2009), who suggest that the requirements for developing an AI of this level mean that “much is still needed for the development of artificial forms of agency” (pp. 12). Hence, we must find another way to ascribe responsibility to a LAR that kills in the first instance. If this cannot be achieved, we are stuck with the conclusion that LARs cannot be held morally responsible for actions on the battlefield and are not ethical to use.
Perhaps there is a way of ascribing moral responsibility to the LAR’s commanding officer. Soldiers on the battlefield take orders from a higher command, and are given objectives to achieve when carrying out missions. The commanding officer is aware that the soldiers in his unit will target and kill enemy soldiers if needed; in that sense, he is aware of the consequences of the soldiers killing. When applied to the LAR scenario, it could be argued that the commanding officer is in the same position. However, this does not satisfy the two remaining conditions needed to ascribe moral responsibility. The commanding officer is not the one who pulls the trigger; the soldiers and the LAR carry out the act. Moreover, when soldiers are on the battlefield, they are effectively ‘in the dark’. The commanding officer may have limited communication with them, but cannot make the soldiers carry out the decision of targeting and killing. Only the soldier or LAR can do this, as they are pulling the trigger.
Marco Sassoli (2013) attempts to rectify this. He claims that the main issue with ascribing moral responsibility to LARs arises because human involvement occurs ‘before’ the action is carried out by the LAR. He argues that “human beings are subject to legal rules, and only human beings are obliged to follow them” (pp. 2). In relation to LARs, responsibility applies only to those who manufacture and programme the LAR for purposes decided by human authorities. Sassoli expands on this by claiming that, no matter how far we go in developing weapons technology and autonomous robots in war, there will always be some aspect of humanity involved. Humans will decide whether LARs are developed, and how they are developed. When these LARs are created in the possible near future, human engineers will be responsible for creating them. Sassoli believes that these humans are liable to blame, whereas a LAR is not. Hence, on his view, the claim that LARs should not be allowed to make decisions such as killing seems irrelevant.
On the other hand, this response still misses the same component. To successfully determine moral responsibility for certain actions, there must be a causal connection between the agent who carries out the act and the consequences of the action. The LAR is the one who carries out the action of targeting enemies, pulling the trigger and killing them. In other words, the commanding officer of the LAR is not making the decision to kill an enemy soldier. The LAR itself is independently making the decision to pull the trigger and lethally engage. Without a human moral decision-making process when targeting enemy soldiers in the first instance, responsibility still cannot be ascribed.
This is not to rule out the wider scope of responsibility mentioned by Sassoli. If LARs are developed and eventually put into war situations, there are obvious parties responsible for their actions. Developers, manufacturers and military officials are the ones who put the machines into the situation, and are therefore responsible for their actions (both right and wrong). Yet it still seems unethical to use LARs when we cannot successfully ascribe moral responsibility for their own actions. To be held morally accountable for an action, there must be a causal connection between the agent and the consequences of the action, and the agent must be aware of the action being carried out and its potential results. If these criteria are not fulfilled, then LARs cannot successfully be ascribed moral responsibility. Therefore, they are unethical and should not be used in a military capacity.
The eagerness to develop LARs for use in the military brings up a wider issue on the topic of war. By adhering to regulations influenced by Just War Theory and Consequentialism, it can be suggested that we have lost sight of what we should be hoping to achieve when addressing matters of conflict. Mehta (2010) believes these moral frameworks are not based on an ethical rejection of war itself. Rather than addressing the wider picture of war, we strive to improve the conditions of war to make it ‘more bearable’. By attempting to provide arguments for the use of LARs on the battlefield, we are simply ignoring the fact that war is a significant moral issue. Zehfuss (2010) demonstrates this by referring to the use of ‘precision bombs’. By relying on consequentialist criteria, the focus is to minimize suffering for the greatest number. By developing a bomb that can accurately target enemy soldiers, there is more inclination to use it than a conventional one, due to the potential reduction of non-combatant casualties. If we apply this thinking to LARs, we are aware that they would be (arguably) more practical to use than human soldiers. If this is the case, Zehfuss would argue that we would feel more inclined to use these robots to improve the conditions of war. Rather than trying to address the larger issues of war and conflict, we are using technology and advanced weaponry to turn a blind eye to the brutality of war;
“The problem of ethics is precisely that of confronting a question that does not have the sort of appealing answer that resolving dilemmas of warfare through the technical fix of high-tech weaponry would provide.” (Zehfuss, 2010. pp. 562)
Unsurprisingly, these approaches to war seem likely to continue. With the ever-changing environment of war, technology will be developed to keep up with rival nations. The potential development of LARs and artificial intelligence will follow suit, and remain focused on approaching moral issues of war with computing and machinery.
This focus on using technology to solve ethical issues on the battlefield leaves too much uncertainty when it concerns actions that involve targeting and killing. One of the key points discussed in this paper involves using LARs to target and kill human soldiers. If this is the case, then it is wrong that a human soldier should be targeted or killed by a machine that cannot be ascribed moral responsibility. Yet this should not rule out the possibility that fully autonomous weapons can be used in another capacity on the battlefield. As stated by Arkin, these robots would be highly advanced and developed to withstand the environment of war. If this is true, we could use autonomous robots in missions that do not involve targeting and killing human soldiers. For instance, autonomous robots could be used to enter environments that have been subject to chemical weapon attacks. Human soldiers would be more vulnerable in these situations, and using autonomous robots would reduce risk and human casualties. Autonomous machines could also be used for search and rescue missions, and for re-construction after a war has finished. These capacities can already be seen in modern-day autonomous robots. For example, bomb disposal robots can be controlled from a safe distance when disarming bombs. This reduces risk to humans, potentially reducing human suffering in the process. This is only one example of the potential use of autonomous robots in war. Yet if we allow militaries to use LARs on the battlefield with the ability to target human soldiers, this is where the moral issues arise. Therefore, we can either wait until there is a type of LAR that can be morally aware of its own actions, or discard the consequentialist claim that they are ethical and prevent their development.
In summary, I have established an argument against the claim that LARs would be a practical and ethical alternative to using human soldiers on the battlefield. This involves taking the human element out of decisions that involve targeting and killing enemy soldiers. The consequentialist (Arkin) would argue that this would reduce the risk to human soldiers and promote better decisions being made in war. Yet there are concerns over the difficulty of ascribing moral responsibility for their actions where targeting and killing are involved. Through my discussion, I have demonstrated that agents who carry out moral actions must fit certain criteria to be ascribed moral responsibility: a causal connection must be present between an agent and the action, the agent must be aware of the potential moral consequences of the action, and they must freely choose to carry out the act. A human soldier fits these criteria, as do drone pilots and AWDS programmers. The same cannot be said for LARs, as they do not have the ability to be morally aware of the potential consequences of their actions. Hence, if they cannot be successfully ascribed moral responsibility, then it would seem unethical to use them to kill enemy soldiers in war.
This paper will hopefully demonstrate the ethical issues that surround the development of fully autonomous robots and LARs. We should not underestimate the reality of weapons development and more daring militaries. This branch of technology is advancing at an enormous rate, and it is a genuine possibility that these types of machine will become a reality. It also highlights concerns about how we approach the issue of war. Western political ideologies are focused primarily on the pursuit of security, and the influence of ethical theories like consequentialism results in more importance being placed on the outcomes of actions. In other words, policies that intend to create machines that can cope with the brutality of war and reduce risk to human soldiers will continue. If these approaches lead to the development of more advanced LARs, the question remains over human influence in war. Granted, the possible development of these LARs could see reduced risk to humans on the battlefield. Yet, re-visiting the concern put forward by Johnson and Axinn, distancing ourselves from the brutality of the battlefield by simply letting robots carry out actions that involve the killing of human soldiers is a genuine moral concern. The inevitable rise of these machines must lead us to consider the ethical implications, be it sooner rather than later. If we do not, it may leave the roles of moral decision-making and moral responsibility in doubt.
- Arkin, R. (2010). ‘The Case for Ethical Autonomy in Unmanned Systems’. Journal of Military Ethics, 9(4), pp.332-341.
- Barandiaran, X.E., E. Di Paolo, and M. Rohde. (2009). ‘Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-Temporality in Action’, Adaptive Behavior, 17(5): 367–386.
- Bechtel, W. (1985). ‘Attributing Responsibility to Computer Systems’, Metaphilosophy, 16(4): 296–306.
- Dennett, D. C. (1997). ‘When HAL Kills, Who’s to Blame? Computer Ethics,’ in HAL’s Legacy: 2001’s Computer as Dream and Reality, D. G. Stork (ed.), Cambridge, MA: MIT Press.
- Driver, J. (2017). ‘Philosophy – Ethics: Consequentialism’. Wireless Philosophy. Available at: https://youtu.be/hACdhD_kes8 [Accessed 31 Mar. 2017].
- Eshleman, A. (2016) ‘Moral Responsibility’, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/moral-responsibility/>.
- Feinberg, J. (1970). ‘Doing & deserving’. 1st ed. Princeton, N.J.: Princeton University Press.
- Johnson, A. and Axinn, S. (2013). ‘The Morality of Autonomous Robots’. Journal of Military Ethics, 12(2), pp.129-141.
- Mehta, U. (2010). ‘Gandhi on Democracy, Politics and the Ethics of Everyday Life’. Modern Intellectual History, 7(02), pp.355-371.
- Noorman, M. (2016) ‘Computing and Moral Responsibility’, The Stanford Encyclopedia of Philosophy Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/computing-responsibility/>.
- Sassoli, M. (2013) ‘Autonomous Weapons – Potential advantages for the respect of international humanitarian law’, Professionals in Humanitarian Assistance and Protection, pp. 1-5.
- Shaw, W.H. (2016) ‘Utilitarianism and the Ethics of War’. Routledge.
- Sullins, J. (2010). ‘RoboWarfare: can robots be more ethical than humans on the battlefield?’. Ethics and Information Technology, 12(3), pp.263-275.
- Tonkens, R. (2012). ‘The Case Against Robotic Warfare: A Response to Arkin’. Journal of Military Ethics, 11(2), pp.149-168.
- Zehfuss, M. (2010) ‘Targeting: Precision and the production of ethics’, European Journal of International Relations, 17(3), pp. 543–566.