Killer Robots Are Coming – AI Experts To Assess Their Threat to Humanity

It’s no secret that some future robots will be designed to kill. Certainly, we’ll have service bots aplenty that are as safe as golden retrievers, but there’s no denying that robots will also have the capacity to be lethal with intent, if they are designed to be.

We know this because militaries around the world are looking to robots to reduce harm to soldiers and civilians. The US military, for one, has replaced a significant portion of its manned aircraft with unmanned aerial vehicles, which now account for more than 30 percent of its fleet. Furthermore, military funding is currently fueling the development of robots that can play various roles in the theater of war, whether in a support role like Boston Dynamics’ Alpha Dog, in defense like South Korea’s robotic sentry turret, or in reconnaissance like iRobot’s 110 FirstLook mini tank.

So the recent rapid developments in the field of robotics raise a question: do we need to be concerned about future robots autonomously killing some, if not all, of the humans on Earth?

It’s a legitimate question that has been kicked around in both science fiction and scientific circles for years, with some arguing that such an outcome is inevitable while others insist humans will always be able to maintain control. Now, a philosopher, a scientist, and a co-founder of Skype are planning to take this futuristic risk assessment up a notch. Set to launch next year, the Center for the Study of Existential Risk at Cambridge University will be dedicated to considering the rise of artificial intelligence and its potential to create the most feared doomsday scenarios.

Though the threat is still years off (in all likelihood), center co-founder and philosophy professor Huw Price feels that these issues need to be wrestled with now. As he told the Associated Press, “we’re no longer the smartest things around.” He added, “In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.”

While the members of the think tank-like center could end up merely musing about what it means to be human in the face of technology rather than making a serious study of the threat, the founders are committed to building a world-class center of great intellectual power. The proposal calls for researchers to engage in multidisciplinary, scientific investigations to ensure that “our own species has a long-term future,” as the center’s page describes.

This kind of research can fill a safety and security gap that most people assume someone out there is already addressing. But the rate of technological change is so great that it is difficult to stay on top of fields as broad as robotics and artificial intelligence (as readers of Singularity Hub are well aware).

For some, the robot threat is far more immediate than a distant doomsday scenario. In fact, a 50-page report titled “Losing Humanity: The Case Against Killer Robots” by Human Rights Watch already addresses the issue of autonomous drones and calls for a ban on their development. Steve Goose, the group’s arms division director, told The Guardian, “Giving machines the power to decide who lives and dies on the battlefield would take technology too far.”

Check out this short video the group put out to address the issue:

In response to the recent interest in this issue, the Pentagon issued a policy directive requiring that behind every drone there be a human being making the decisions.

Though this policy is reasonable now, one wonders whether it will always hold, as military use of robots could escalate rapidly and developments in military drones are coming fast. Just this past summer, the X-47B robotic fighter completed the first phase of testing aimed at having it take off from and land on an aircraft carrier completely autonomously.

Then there’s one scenario that often comes up: rogue countries or developers creating completely autonomous killer bots and unleashing them on the world. How feasible is this, really? That question has not been rigorously answered, which is exactly why a center like the one being proposed is necessary.

Those who are at the cutting edge of technology are rarely in a position to question the ethics of what they are bringing into the world until it is too late. Having expert researchers dedicated to studying these breakthrough technologies and assessing their threat to the human race is imperative.

In truth, one center is not even close to being enough, but we have to start somewhere.

Let’s be clear: killer robots and the debate around them aren’t going away anytime soon. Fortunately, the risk they actually pose can now start to be investigated more rigorously, in hopes that artificial intelligence can be understood and corralled for the safety of all.

[featured image credit: Newhaircut on flickr]

David J. Hill
David started writing for Singularity Hub in 2011 and served as editor-in-chief of the site from 2014 to 2017 and SU vice president of faculty, content, and curriculum from 2017 to 2019. His interests cover digital education, publishing, and media, but he'll always be a chemist at heart.