Since its release on Monday, the Clinic’s joint report with Human Rights Watch on “killer robots” has been attracting quite a bit of attention. Check out articles in The Guardian, as well as segments on Democracy Now and the BBC.

Bonnie also wrote an excellent Op-Ed about the issue for Foreign Policy magazine, which is reprinted in full below.

The Trouble with Killer Robots

Imagine a mother who sees her children playing with toy guns as a military force approaches their village. Terrified, she sprints toward the scene, yelling at them to hurry home. A human soldier would recognize her fear and realize that her actions are harmless. A robot, unable to understand human intentions, would observe only figures, guns, and rapid movement. While the human soldier would probably hold fire, the robot might shoot the woman and her children.

Despite such obvious risks to civilians, militaries are already planning for a day when sentry robots stand guard at borders, ready to identify and kill intruders without an order from a human soldier. Unmanned aircraft, controlled only by pre-programmed algorithms, might carry up to 4,500 pounds of bombs that they could drop without real-time authorization from commanders.

While fully autonomous robot weapons don’t exist yet, precursors have already been deployed or are in development. So far, these precursors still rely on human decision-making, but experts expect them to be able to choose targets and fire without human intervention within 20 to 30 years. Crude models could be available much sooner. If the move toward increased weapons autonomy continues, images of war from science fiction could become more science than fiction.

Replacing human soldiers with “killer robots” might save military lives, but at the cost of making war even more deadly for civilians. To preempt this outcome, governments should adopt an international prohibition on the development, production, and use of fully autonomous weapons. These weapons should be stopped before they appear in national arsenals and in combat.

Fully autonomous weapons would be unable to comply with the basic principles of international humanitarian law — distinction, proportionality, and military necessity — because they would lack human qualities and judgment.

Distinguishing combatants from noncombatants, a cornerstone of international humanitarian law, has become increasingly difficult in wars in which insurgents blend in with the civilian population. In the absence of uniforms or clear battle lines, the only way to determine a person’s intentions is to interpret his or her conduct, making the human judgment that fully autonomous weapons lack all the more important.

Killer robots also promise to remove another safeguard for civilians: human emotion. While proponents contend that fully autonomous weapons would be less likely to commit atrocities because fear and anger wouldn’t drive their actions, emotions are actually a powerful check on the killing of civilians. Human soldiers can show compassion for other humans. Robots can’t. In fact, from the perspective of a dictator, fully autonomous weapons would be the perfect tool of repression, removing the possibility that human soldiers might rebel if ordered to fire on their own people. Far from being irrational influences and obstacles to reason, emotions can be central to restraint in war.

Fully autonomous weapons would also cloud accountability in war. Without a human pulling the trigger, it’s not clear who would be responsible when such a weapon kills or injures a civilian, as is bound to happen. A commander is legally responsible for subordinates’ actions only if he or she fails to prevent or punish a foreseeable war crime. Since fully autonomous weapons would be, by definition, out of the control of their operators, it’s hard to see how the deploying commander could be held responsible. Meanwhile, the programmer and manufacturer would escape liability unless they intentionally designed or produced a flawed robot. This accountability gap would weaken deterrence of violence against civilians and impede civilians’ ability to seek recourse for wrongs suffered.

Despite these humanitarian concerns, military policy documents, especially in the United States, reflect the move toward increasing autonomy of weapons systems. U.S. Department of Defense roadmaps for development in ground, air, and underwater systems all discuss full autonomy. According to a 2011 DOD roadmap for ground systems, for example, “There is an ongoing push to increase [unmanned ground vehicle] autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy.” Other countries, including China, Germany, Israel, Russia, South Korea, and the United Kingdom, have also devoted attention and money to autonomous weapons.

The fully autonomous sentry robot and aircraft alluded to above are in fact based on real weapons systems. South Korea has deployed the SGR-1 sentry robot along the Demilitarized Zone with North Korea, and the United States is testing the X-47B aircraft, which is designed for combat. Both currently require human oversight, but they are paving the way to full autonomy. Militaries want fully autonomous weapons because they would reduce the need for manpower, which is expensive and increasingly hard to come by. Such weapons would also keep soldiers out of the line of fire and expedite response times. These are understandable objectives, but the cost for civilians would be too great.

Taking action against killer robots is a matter of urgency and humanity. Technology is alluring, and the more countries invest in it, the harder it is to persuade them to surrender it. But technology can also be dangerous. Fully autonomous weapons would lack human judgment and compassion, two of the most important safeguards for civilians in war. To preserve these safeguards, governments should ban fully autonomous weapons nationally and internationally. And they should do so now.