Controversy Brews Over Role Of ‘Killer Robots’ In Theater of War

Technology promises to improve people’s quality of life, and what could be a better example of that than sending robots instead of humans into dangerous situations? Robots can help conduct research in deep oceans and harsh climates, or deliver food and medical supplies to disaster areas.

As the science advances, it’s becoming increasingly possible to dispatch robots into war zones alongside or instead of human soldiers. Several military powers, including the United States, the United Kingdom, Israel and China, are already using partially autonomous weapons in combat and are almost certainly pursuing other advances in private, according to experts.

The term “killer robot,” coined by a coalition of international human rights groups to describe these autonomous machines, conjures images of a Terminator-style humanoid. The humanoid robots Google recently bought are neat, but most machines being used or tested by national militaries are, for now, more like robotic weapons than robotic soldiers. Still, the line between useful weapons with some automated features and robot soldiers ready to kill can be disturbingly blurry.

Whatever else they do, robots that kill raise moral questions far more complicated than those posed by probes or delivery vehicles. Their use in war would likely save lives in the short run, but many worry that they would also result in more armed conflicts and erode the rules of war — and that’s not even considering what would happen if the robots malfunctioned or were hacked.

Seeing a slippery slope ahead, human rights groups began lobbying last year for lethal robots to be added to the list of prohibited weapons that includes chemical weapons. And the U.N., driven in part by a 2013 report by Special Rapporteur Christof Heyns, has set a meeting in May for nations to explore that and other limits on the technology.

“Robots should not have the power of life and death over human beings,” Heyns wrote in the report.

There’s no doubt that major military powers are moving aggressively into automation. Late last year, Gen. Robert Cone, head of the U.S. Army’s Training and Doctrine Command, suggested that up to a quarter of the service’s boots on the ground could be replaced by smarter and leaner weaponry. In January, the Army successfully tested a robotic self-driving convoy that would reduce the number of personnel exposed to roadside explosives in war zones like Iraq and Afghanistan.

According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but “target acquisition” is among its capabilities.

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers now seem to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently passed a non-binding resolution urging member states not to use or develop robots that can kill without human participation.

Yet even robots not specifically designed to make kill decisions could end up doing so if they malfunctioned, or if their interfaces made it easier for operators to accept automated targeting than to reject it.

What if, for example, a robot tasked with destroying an unmanned military installation instead destroyed a school? Robotic sensing technology can only barely identify big, obvious targets in clutter-free environments. For that reason, the open ocean is the first place robots are firing on targets. In more cluttered environments like the cities where most recent wars have been fought, the sensing becomes less accurate.

The U.S. Department of Defense directive, which insists that humans make kill decisions, nonetheless addresses the risk of “unintended engagements,” as a spokesman put it in an email interview with Singularity Hub.

Sensing and artificial intelligence technologies are sure to improve, but there are some risks that military robot operators may never be able to eliminate.

Some issues are the same ones that plague the adoption of any radically new technology: the chance of hacking, for instance, or the legal question of who’s responsible if a war robot malfunctions and kills civilians.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two sellers on Amazon recently and unknowingly competed for the same business, the interaction of their pricing algorithms drove listed prices into the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
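The dynamic is easy to reproduce. Below is a minimal Python sketch of two naive repricing rules feeding off each other; the specific undercut and markup factors, starting prices, and time span are illustrative assumptions, not the actual sellers’ algorithms.

```python
# Illustrative sketch of two interacting repricing rules, loosely modeled on
# the Amazon incident described above. The factors and prices below are
# assumptions for illustration, not the real sellers' algorithms.

def reprice_undercutter(rival_price: float) -> float:
    """Seller A prices just below the rival to win the sale."""
    return rival_price * 0.9983


def reprice_markup(rival_price: float) -> float:
    """Seller B prices above the rival, planning to fill orders at a profit."""
    return rival_price * 1.2706


price_a, price_b = 35.00, 40.00  # assumed starting prices
for day in range(1, 61):
    price_a = reprice_undercutter(price_b)
    price_b = reprice_markup(price_a)
    if day % 10 == 0:
        print(f"day {day}: seller A ${price_a:,.2f}, seller B ${price_b:,.2f}")

# Because 0.9983 * 1.2706 > 1, each round multiplies both prices by roughly
# 1.27, so after about two months of daily repricing they reach the tens of
# millions of dollars -- neither rule is malicious, but their interaction is.
```

Neither rule looks dangerous in isolation; the runaway behavior only emerges from their interaction, which is exactly the point Sharkey makes about opposing algorithms whose contents neither side knows.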

An even likelier outcome is that human enemies would exploit weaknesses in the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place the launcher in a crowded urban area, where destroying it would cause civilian casualties.

NASA is building humanoid robots.

Or consider a real scenario. The U.S. sometimes programs its semi-autonomous drones to locate a terrorist based on his cell phone SIM card. Terrorists, knowing that, often pass used SIM cards to unwitting civilians. Would an autonomous killing machine be able to plan for such deception? Even if robots were programmed to anticipate particular deceptions, the history of the web suggests that terrorists would find others.

Of course, most technologies stumble at first, and many turn out okay. The militaries developing war-fighting robots are assuming this model, starting with limited functions and use cases. But they are almost certainly exploring more disruptive options, if only to keep up with their enemies.

Sharkey argues that, given the lack of any clear delineation between limited automation and killer robots, a hard ban on robots capable of making kill decisions is the only way to ensure that machines never have the power of life and death over human beings.

“Once you’ve put in billions of dollars of investment, you’ve got to use these things,” he said.

Few expect the U.N. meeting this spring to result in an outright ban, but it will begin to lay the groundwork for the role robots will play in war.

Photos: Lockheed Martin, QinetiQ North America, NASA

Cameron Scott
Cameron received degrees in Comparative Literature from Princeton and Cornell universities. He has worked at Mother Jones, SFGate and IDG News Service and been published in California Lawyer and SF Weekly. He lives, predictably, in SF.