Killer Robots Won’t Go to War If Global Movement Has Its Way

Much ink has been spilled in recent years about the rise of killer robots. A movement to ensure no blood is shed by autonomous weapons—machines that kill or maim without a human behind the joystick or keyboard—is racing to pre-emptively ban the technology before robots go to war.

“We’re talking a wide range of weapons systems with various levels of human control. We’re not just talking about weapons systems but a new way of warfighting,” says Mary Wareham, advocacy director for the Human Rights Watch arms division, who also serves as the global coordinator for the Campaign to Stop Killer Robots.

Human Rights Watch is one of more than 60 nongovernmental organizations (NGOs) that have coalesced around the campaign, which launched in April 2013 with the single-minded goal to “preemptively ban the development, production and use of fully autonomous weapons.”

The coalition includes experts in artificial intelligence, human rights groups, former diplomats and even a group of Nobel Peace Prize laureates led by Jody Williams (known for her work to ban land mines), from about two dozen countries in what Wareham calls a “truly global campaign” to stop what have been dubbed “lethal autonomous weapons systems,” or LAWS.

“We’re trying to get a diverse range of groups around the table because that’s part of building a movement,” she says.

In October, the New York Times ran a lengthy feature on the retooling of America’s military with autonomous weapons systems, which include everything from robotic fighter jets to autonomous submarines capable of stalking targets thousands of miles away without human guidance.

The Times reporters wrote, “The Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power. It is spending billions of dollars to develop what it calls autonomous and semi-autonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race.”

The concern of the groups that make up the Campaign to Stop Killer Robots isn’t fear of a dystopian Terminator or Matrix world, but the removal of human control, judgment and conscience from the theater of war.

“We’re debating the nature of human control over the weapon systems and the individual attacks,” Wareham says.

In 2012, the Pentagon released a policy (Department of Defense Directive 3000.09) that on the surface seems to imply that some sort of human agency will helm a future robotic army. In part, it says, “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The issue has quickly—in the relative world of government bureaucracy—moved to the forefront of international discourse, according to Wareham. Later this month, at the Fifth Review Conference of the Convention on Certain Conventional Weapons (CCW) at the United Nations in Geneva, delegates are expected to take up a recommendation from a previous meeting in April to establish a Group of Governmental Experts to examine concerns and options relating to LAWS.

That could be the first real step toward an international agreement against killer robots.

The notion that the proverbial Pandora’s Box has already been opened, with killer robots now in various stages of development, doesn’t deter the NGOs. Wareham notes that there is precedent for hope, based on the Protocol on Blinding Laser Weapons, an international agreement involving more than 100 nations that went into effect in 1998. It prohibits the use of laser weapons to blind combatants.

At one point, Wareham says, that technology had also been considered inevitable.

“We’re not going to give up because it’s going to be inevitable, according to some people. We’re going to keep going until we get the ban,” she says.

What is a killer robot?

Wading into the debate on a more philosophical plane is Tero Karppi, an assistant professor of media studies at the University at Buffalo. He and his colleagues recently published a paper in the International Journal of Cultural Studies that analyzes the Campaign to Stop Killer Robots and the larger implications of artificial intelligence for society.

Why is a digital media scholar interested in killer robots? Karppi explains by email to Singularity Hub:

“Automation of process and how this leads to artificial intelligence are the key issues of our contemporary media landscape,” he writes. “While killer robots operate on a global scale, on the public scale we have AI that works in the context of finance, healthcare and social institutions, and on the private scale we have social bots operating on websites, virtual assistants at home and in our smartphones, algorithms recommending stuff on social media platforms.

“The culture is becoming penetrated by systems of artificial intelligence,” he adds. “My interest is what are the cultural impacts of these systems on the scales of global, public and private.”

Karppi says it’s easy to focus on progress without reflecting on what those technological advancements mean to society—and how they mirror who we are. “We have to remember that these technologies are also products of the values and ideas of our current culture,” he says.

Karppi and his co-authors employ a principle used in media theory, called cultural techniques, to explore the theme of killer robots in detail. The approach deconstructs a topic in order to understand how various parts turn into actual systems, products or concepts. It provides insight into the process of becoming—how A, B, C and D got to Z.

“The ban of killer robots may work, if we are able to define what is meant with a killer robot,” Karppi explains. “But what are the essential parts that compose a killer robot? Is it a robot designed by the military? Is it a robot that is autonomous and operates without human intervention? I think there are still a lot of open questions. What does human intervention in this case mean? In the paper we try to deconstruct what killer robots are into smaller elements and techniques.”

For Wareham, whatever ambiguity surrounds terms like “killer robot” or “meaningful human control”—the standard the NGOs have pushed for deciding where to draw the line in the computer code—is secondary to taking global action on the issue sooner rather than later.

“We’re not in the popular campaigning phase yet. I keep saying to the government, ‘this is how far we’ve gotten with this little money and this much interest. Just imagine when we’re able to turn this into a really big global movement,’” Wareham says. “We’re only going to get bigger the longer they take to deal with this. It’s really in their interest to deal with this now rather than down the road when we have a really big movement to stop killer robots.”

Image Credit: Shutterstock

Peter Rejcek
https://www.peterrejcek.com/
Formerly the world’s only full-time journalist covering research in Antarctica, Peter became a freelance writer and digital nomad in 2015. Peter’s focus for the last decade has been on science journalism, but his interests and expertise include travel, outdoors, cycling, and Epicureanism (food and beer). Follow him at @poliepete.