Mike Ryder:
Humans will always make the final decision on whether armed robots can shoot, according to a statement by the US Department of Defence.
The clarification comes amid fears about a new advanced targeting system, known as Atlas, that will use artificial intelligence in combat vehicles to target and execute threats.
While the public may feel uneasy about so-called “killer robots”, the concept is nothing new – machine gun-wielding “Swords” robots were deployed in Iraq in 2007.
Our relationship with military robots goes back even further than that. This is because when people say “robot”, they can mean any technology with some form of autonomous element that allows it to perform a task without the need for direct human intervention.
These technologies have existed for a very long time. During World War II, the proximity fuse was developed to explode artillery shells at a predetermined distance from their target. This made the shells far more effective than they would otherwise have been, by augmenting human decision-making and, in some cases, taking the human out of the loop completely.
So the question is not so much whether we should use autonomous weapon systems in battle; we already use them and they take many forms. Rather, we should focus on how and why we use them and what form – if any – human intervention should take.
My research explores the philosophy of human-machine relations, with a particular focus on military ethics, and the way we distinguish between humans and machines.
During World War II, mathematician Norbert Wiener laid the groundwork for cybernetics – the study of the interface between humans, animals and machines – through his work on the control of anti-aircraft fire. By studying the deviations between an aircraft’s predicted motion and its actual motion, Mr Wiener and his colleague Julian Bigelow came up with the concept of the “feedback loop”, whereby deviations could be fed back into the system to correct further predictions.
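That insight is easy to see in miniature. The sketch below (in Python, written for this piece rather than drawn from Wiener and Bigelow’s actual mathematics; the function name, the sample observations and the correction “gain” are illustrative assumptions) predicts a target’s next position, compares the prediction with what is actually observed, and feeds the error back to improve the next prediction.

```python
# A minimal, illustrative sketch of the feedback-loop idea (not Wiener and
# Bigelow's actual mathematics). The correction "gain" and the sample data
# are assumptions made for this example only.

def track(observations, gain=0.5):
    """Predict each next position from an estimated velocity, then feed the
    prediction error (actual minus predicted) back to correct that estimate."""
    position = observations[0]
    velocity = 0.0
    results = []
    for actual in observations[1:]:
        predicted = position + velocity      # predict where the target will be
        error = actual - predicted           # deviation: actual vs predicted motion
        velocity += gain * error             # feed the deviation back into the model
        position = actual                    # update to the observed position
        results.append((predicted, error))
    return results

# A target moving at a steady two units per step: the prediction error
# shrinks (2.0, 1.0, 0.5, 0.25, ...) as the feedback corrections accumulate.
for predicted, error in track([0.0, 2.0, 4.0, 6.0, 8.0, 10.0]):
    print(f"predicted {predicted:.2f}, error {error:.2f}")
```

Without the feedback step, the tracker would keep making the same mistake; with it, each deviation narrows the next one – the essence of the loop Wiener described.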
Mr Wiener’s theory, therefore, went far beyond mere augmentation, for cybernetic technology could be used to pre-empt human decisions – removing the fallible human from the loop – in order to make better, quicker decisions and thus make weapons systems more effective.
In the years since World War II, the computer has emerged to sit alongside cybernetic theory to form a central pillar of military thinking, from the laser-guided “smart bombs” of the Vietnam era to cruise missiles and Reaper drones.
It’s no longer enough merely to augment the human warrior, as it was in the early days of these technologies. The next phase is to remove the human completely – “maximising” military outcomes while minimising the political cost associated with the loss of allied lives. This has led to the widespread use of military drones by the United States and its allies.
While these drone missions are highly controversial, in political terms they have proved far preferable to the public outcry caused by military deaths.
One of the most contentious issues relating to drone warfare is the role of the drone pilot or “operator”. Like all personnel, these operators are bound by their employers to “do a good job”.
However, the terms of success are far from clear. As philosopher and cultural critic Laurie Calhoun observes: “The business of UCAV operators is to kill.” (UCAV refers to unmanned combat aerial vehicle).
In this way, their task is not so much to make a human decision, but rather, to do the job that they are employed to do. If the computer tells them to kill, is there really any reason they shouldn’t?
A similar argument can be made with respect to the modern soldier.
From GPS navigation to video uplinks, soldiers carry numerous devices that tie them into a vast network that monitors and controls them at every turn.
This leads to an ethical conundrum. If the purpose of the soldier is to follow orders to the letter – with cameras used to ensure compliance – then why do we bother with human soldiers at all?
After all, machines are far more efficient than human beings and don’t suffer from fatigue and stress in the same way as a human does. If soldiers are expected to behave in a programmatic, robotic fashion anyway, then what’s the point in shedding unnecessary allied blood?
The answer is that the human serves as an alibi or form of “ethical cover” for what is, in reality, an almost wholly mechanical, robotic act.
Just as the drone operator’s job is to oversee the computer-controlled drone, so the human’s role in the Department of Defence’s new Atlas system is merely to act as ethical cover, in case things go wrong.
While Predator and Reaper drones may stand at the forefront of the public imagination about military autonomy and “killer robots”, these innovations are, in themselves, nothing new. They are merely the latest in a long line of developments that go back many decades.
While it may comfort some people to imagine that machine autonomy will always be subordinate to human decision-making, this really does miss the point. Autonomous systems have long been embedded in the military and we should prepare ourselves for the consequences.
(Mike Ryder, Associate Lecturer in Philosophy, Lancaster University in Britain)