Controversy Brews Over Role Of ‘Killer Robots’ In Theater of War

Technology promises to improve people’s quality of life, and what could be a better example of that than sending robots instead of humans into dangerous situations? Robots can help conduct research in deep oceans and harsh climates, or deliver food and medical supplies to disaster areas.

As the science advances, it’s becoming increasingly possible to dispatch robots into war zones alongside or instead of human soldiers. Several military powers, including the United States, the United Kingdom, Israel and China, are already using partially autonomous weapons in combat and are almost certainly pursuing other advances in private, according to experts.

The idea of a killer robot, as a coalition of international human rights groups has dubbed the autonomous machines, conjures images of a humanoid, Terminator-style machine. The humanoid robots Google recently bought are neat, but most machines being used or tested by national militaries are, for now, more like robotic weapons than robotic soldiers. Still, the line between useful weapons with some automated features and robot soldiers ready to kill can be disturbingly blurry.

Whatever else they do, robots that kill raise moral questions far more complicated than those posed by probes or delivery vehicles. Their use in war would likely save lives in the short run, but many worry that they would also result in more armed conflicts and erode the rules of war — and that’s not even considering what would happen if the robots malfunctioned or were hacked.

Seeing a slippery slope ahead, human rights groups began lobbying last year for lethal robots to be added to the list of prohibited weapons that includes chemical weapons. And the U.N., driven in part by a 2013 report by Special Rapporteur Christof Heyns, has set a meeting in May for nations to explore that and other limits on the technology.

“Robots should not have the power of life and death over human beings,” Heyns wrote in the report.

There’s no doubt that major military powers are moving aggressively into automation. Late last year, Gen. Robert Cone, head of the U.S. Army’s Training and Doctrine Command, suggested that up to a quarter of the service’s boots on the ground could be replaced by smarter and leaner weaponry. In January, the Army successfully tested a robotic self-driving convoy that would reduce the number of personnel exposed to roadside explosives in war zones like Iraq and Afghanistan.

According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently adopted a non-binding ban on member states’ use or development of robots that can kill without human participation.

Yet even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their interfaces made it easier to accept automated targeting than to reject it.

What if, for example, a robot tasked with destroying an unmanned military installation instead destroyed a school? Robotic sensing technology can only barely identify big, obvious targets in clutter-free environments. For that reason, the open ocean is the first place robots are firing on targets. In more cluttered environments like the cities where most recent wars have been fought, the sensing becomes less accurate.
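To make that risk concrete, here is a minimal, purely hypothetical sketch of the kind of human-in-the-loop gate the 2012 U.S. directive calls for: automated sensing only produces a recommendation, low-confidence detections in cluttered scenes are held rather than engaged, and any engagement is referred to a person. The classifier output, the confidence threshold and the target labels are assumptions made for illustration, not any fielded system’s logic.

# Hypothetical illustration only: a recommendation gate in which automated
# sensing never fires on its own. The threshold, labels and confidence
# values are invented for this sketch, not taken from any real system.

def recommend_engagement(confidence: float, label: str, threshold: float = 0.95) -> str:
    """Return a recommendation string; the machine never decides to fire."""
    if label != "unmanned_military_installation":
        return "HOLD: target type not authorized"
    if confidence < threshold:
        return "HOLD: sensing confidence too low (likely a cluttered scene)"
    return "REFER: pass to human operator for a kill/no-kill decision"

# A clutter-free scene versus a crowded city block:
print(recommend_engagement(0.98, "unmanned_military_installation"))  # REFER
print(recommend_engagement(0.58, "unmanned_military_installation"))  # HOLD

Even a gate like this only shifts the problem: if the interface makes accepting the recommendation nearly automatic, the human check becomes a formality.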

The U.S. Department of Defense directive, which insists that humans make kill decisions, nonetheless addresses the risk of “unintended engagements,” as a spokesman put it in an email interview with Singularity Hub.

Sensing and artificial intelligence technologies are sure to improve, but there are some risks that military robot operators may never be able to eliminate.

Some issues are the same ones that plague the adoption of any radically new technology: the chance of hacking, for instance, or the legal question of who’s responsible if a war robot malfunctions and kills civilians.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
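The dynamic is easy to reproduce. Below is a minimal sketch, in Python, of two repricing rules that each react only to the other’s last price; the starting price and multipliers are illustrative assumptions rather than the actual rules the Amazon sellers used, but the runaway feedback loop is the same.

# Illustrative sketch of two blind pricing algorithms escalating each other.
# The starting price and multipliers are invented for this example.
price_a = price_b = 20.00                 # both sellers start at a plausible price
rounds = 0
while max(price_a, price_b) < 1_000_000:
    rounds += 1
    price_a = round(price_b * 0.998, 2)   # A undercuts B slightly
    price_b = round(price_a * 1.27, 2)    # B prices at a premium over A
print(f"After {rounds} rounds: A = ${price_a:,.2f}, B = ${price_b:,.2f}")

Each round multiplies both prices by roughly 1.27, so the escalation is exponential. Neither rule is wrong on its own terms, which is exactly Sharkey’s point about two competing algorithms whose contents neither side knows.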

An even likelier outcome is that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.

NASA is building humanoid robots.

Or consider a real scenario. The U.S. sometimes programs its semi-autonomous drones to locate a terrorist based on his cell phone SIM card. The terrorists, knowing that, often offload used SIM cards to unwitting civilians. Would an autonomous killing machine be able to account for such deception? Even if robots are programmed to anticipate particular ruses, the history of the web suggests that terrorists could find others.

Of course, most technologies stumble at first and many turn out okay. The militaries developing war-fighting robots are assuming this model and starting with limited functions and use cases. But they are almost certainly exploring more disruptive options, if only to keep up with their enemies.

Sharkey argues that, given the lack of any clear delineation between limited automation and killer robots, a hard ban on robots capable of making kill decisions is the only way to ensure that machines never have the power of life and death over human beings.

“Once you’ve put in billions of dollars of investment, you’ve got to use these things,” he said.

Few expect the U.N. meeting this spring to result in an outright ban, but it will begin to lay the groundwork for the role robots will play in war.

Photos: Lockheed Martin, QinetiQ North America, NASA

Discussion — 11 Responses

  • palmytomo March 9, 2014 on 2:20 pm

    Excellent article, thanks. I hope it travels far. Two problems of misuse of military (and other) force seem to be…
    (a) covertness that leads to snowballing loss of checks-and-balances
    (b) remoteness from the vulnerable, so that military staff are devoid of empathy.
    Imagine if the people presently creating and remote controlling killer drones were required after the hit to change out of their we-just-do-what-we’re-told-to uniforms, dress in ordinary clothes, live with the families of the ‘accidentally’ bereaved to witness the anguish they cause. Bruce Thomson in New Zealand.

    • rudyilis palmytomo March 9, 2014 on 6:07 pm

      Drone pilots report PTSD, so they already are experiencing the effects of killing without us having to imagine scenarios like that.

      • palmytomo rudyilis March 10, 2014 on 2:51 pm

        That’s interesting, and sad.
        – In the end it comes down to how much we want killing to be the way to get ahead, or cooperating to optimise use of human resources.
        – I think killing opposition is a universally favoured and successful way of getting ahead (all species do it).
        – But a major factor is destruction of the global habitat: Wars may reduce population (retaining carrying capacity of the planet), but they have been doing huge immediate-survival-justified physical destruction, of infrastructure that will have to be rebuilt (consuming huge amounts of fossil fuels & other resources). Bruce Thomson in New Zealand.

        • daneel333 palmytomo March 11, 2014 on 3:18 am

          “they have been doing huge immediate-survival-justified physical destruction, of infrastructure that will have to be rebuilt (consuming huge amounts of fossil fuels & other resources).”

          And that’s precisely why it’s done! One of the ‘perks’ of war along with protection of various fuel pipelines and other resources.

          But, back to autonomous weapons… don’t we already have automated missile defense systems and a still unresolved problem with land mines?

The autonomy part is nothing new, really, when it comes to humans finding ways to kill or maim other humans. Set-and-forget traps, like mines or remotely operated IEDs, have been around for a while and continue to be used.

          An autonomous mobile humanoid robot patrolling streets in an occupied zone may one day end up being the more desirable option once its ‘intelligence’ is up to the job. Of course, that doesn’t diminish the scope for abuse and the occasional deal-breaking software glitch.

          Meanwhile, we’re starting to think about entrusting our lives to self driving cars with the excuse that the number of fatal accidents will be reduced. The accidents just won’t be our fault.

  • Andrew Atkin March 9, 2014 on 9:29 pm

Imagine an urban war being carried out by “killer robots” that only kill after making a positive identification on a target, using face-recognition software, etc. It could, ultimately, drastically reduce civilian casualties.

We talk as though humans don’t make mistakes. In the end, we would be far better off leaving it to robots…and do you really think that a soldier is not also pretty much a programmed robot, only much more sadistic (on average), reactive and error-prone?

Forget the robot-versus-human debate. Focus on safety and accuracy. Focus on results and do whatever makes the most sense. And don’t think in terms of silly Arnold Schwarzenegger movies.

    • r rands Andrew Atkin March 11, 2014 on 1:32 pm

      ” … far better off leaving it to robots.”

      Leaving exactly what, to robots? Do you think that lethal autonomous drones are always going to be acting on policies that suit your beliefs?

      I suggest you think carefully about all the possible scenarios, including those where you are not unmistakeably one of the “good guys”. There’s nothing to say your tax dollars are always going to pay for your personal safety or the protection of those you care most about.

      And then there’s the larger puzzle of why someone thinks it’s OK to invent yet another gee-whiz, you-beaut technological innovation, reinforcing a prehistoric approach to conflict among humans.

What was it Einstein said? Oh yes, something like:

      “The splitting of the atom changed everything, save man’s mode of thinking. Thus we drift towards unparalleled catastrophe”

      I don’t look forward to a world (where I hopefully will not be), where automated killing machines traverse radioactive landscapes, urban or rural. What a shame the masters of war have no interest in diplomatic innovation.

      • Andrew Atkin r rands March 24, 2014 on 1:55 pm

        We already have nuclear weapons. The cat is out of the bag. The crying is over the spilt milk.

        Robots allow us to do the same job we’ve been doing since the beginning of time, but in a more surgical and ultimately less devastating manner.

  • Nolux March 10, 2014 on 6:06 am

Also consider that autonomous robot combatants could function without weapons. Imagine a soft robot covered with a bulletproof outer layer that could disarm and apprehend a suspect and not feel the need to retaliate when it was shot. It wouldn’t feel anything: anger, fear, frustration, etc.

  • Nolux March 11, 2014 on 4:37 am

I wouldn’t call reduced fatalities an excuse. You obviously haven’t seen my mom drive – I’d rather a robot drive for her. Also, how are unarmed robots sad? They would be programmed for protection and would take a bullet to protect you. I don’t think many soldiers or police officers would do that. Casualties are messy and instigate more violence, so it makes sense that future military robots with advanced AI would be all about defusing situations, e.g., no weapons.

  • Nolux March 12, 2014 on 3:00 am

Unless you’re talking about some malevolent AI, I don’t understand what you’re talking about. Machines murdering people? To what end? Humans do a lot of random murdering now, but keep in mind, far less than in the past. As we learn to communicate better and understand each other, we are becoming a less violent society, and that is reflected in our machines and, hopefully, in our future AIs.