Killer Robots Are Coming – AI Experts To Assess Their Threat to Humanity


It’s no secret that in the future robots — not all, but certain ones — will be designed to kill. Certainly, we’ll have service bots aplenty that are as safe as golden retrievers, but there’s no denying that robots will also have the capacity to be lethal with intent, if they are designed to be.

We know this because militaries around the world are looking to robots to reduce harm to soldiers and citizens. The US military, for one, has replaced manned aircraft with unmanned aerial vehicles to the point that drones now make up more than 30 percent of its fleet. Furthermore, military funding is currently fueling the development of robots that can play various roles in the theater of war, whether in support, like Boston Dynamics’ Alpha Dog, in defense, like South Korea’s robotic turret, or in reconnaissance, like iRobot’s 110 FirstLook mini tank.

So the rapid recent developments in robotics raise a question: do we need to be concerned about future robots autonomously killing some, if not all, of the humans on Earth?

It’s a legitimate question that has been kicked around in both science fiction and scientific circles for years, with some arguing that such a takeover is inevitable while others say that humans will always be able to maintain control. Now a philosopher, a scientist, and a co-founder of Skype are planning to take this futuristic risk assessment up a notch. With the goal of launching next year, the Centre for the Study of Existential Risk at Cambridge University will be dedicated to considering the rise of artificial intelligence and its potential to create the most feared doomsday scenarios.

Though the threat is still years off (in all likelihood), center co-founder and philosophy professor Huw Price feels that these issues need to be wrestled with now. As he told the Associated Press, we must take seriously the possibility of a time when “we’re no longer the smartest things around.” He added, “In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.”

While the members of the think tank-like center could end up musing about the meaning of being human in the face of technology instead of making a serious study of the threat, the founders are committed to building a world-class center of great intellectual power. Additionally, the proposal calls for researchers to engage in multidisciplinary, scientific investigations to ensure that “our own species has a long-term future,” as the center’s page describes.

This kind of research can fill a safety and security gap that most would assume someone out there is already addressing. But the rate of technological change is so great that it is difficult to stay on top of fields as broad as robotics and artificial intelligence (as readers of Singularity Hub are well aware).

For some, the robot threat is far more immediate. In fact, a 50-page report titled “Losing Humanity: The Case Against Killer Robots” from Human Rights Watch already addresses the issue of autonomous drones and calls for a ban on their development. Steve Goose, the group’s arms division director, told The Guardian, “Giving machines the power to decide who lives and dies on the battlefield would take technology too far.”

Check out this short video the group put out to address the issue:

In response to the recent interest in this issue, the Pentagon issued a policy directive requiring that behind every drone there must be a human being making the decisions.

Though this policy is reasonable now, one wonders whether it will always hold, as the inevitable military use of robots could escalate quickly and developments in military drones are coming fast. Just this past summer, the X-47B robotic combat aircraft completed its first phase of testing aimed at taking off from and landing on an aircraft carrier completely autonomously.

Then there’s one scenario that often comes up: rogue countries or developers creating completely autonomous killer bots and unleashing them on the world. How feasible is this, really? That question has not been rigorously answered, which is exactly why a center like the one being proposed is necessary.

Those at the cutting edge of technology are rarely in a position to question the ethics of what they are bringing into the world until it is too late. Having expert researchers dedicated to studying these breakthrough technologies and assessing their threat to the human race is imperative.

In truth, one center is not even close to being enough, but we have to start somewhere.

Let’s be clear: neither killer robots nor the debate about them is going away anytime soon. Fortunately, though, the risk they actually pose can now begin to be investigated more rigorously, in the hope that artificial intelligence can be understood and corralled for the safety of all.

[featured image credit: Newhaircut on flickr]

Discussion — 8 Responses

  • eldras December 19, 2012 on 11:54 am

    I concluded at the London A.I. Club, and Nick Bostrom concluded independently after years of study, that the only way to avert extinction from emerging A.I. is to build contained Superintelligence first. Every other path leads to the end of man.

    see his page on the ethics of A.I.:
    http://www.nickbostrom.com/ethics/ai.html

    “ABSTRACT

    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, and discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.”

    This becomes a real problem the closer we get to sentient machines, because their pace of self-modification will become so fast that they will act to achieve their own goals.

    Getting ahead of them by building contained Superintelligence first is the only way to control them.

  • Alexander Kruel December 19, 2012 on 12:04 pm

    I asked a bunch of experts about their opinion regarding risks associated with artificial general intelligence: http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI

  • dobermanmacleod December 20, 2012 on 3:21 am

    This is humorous, not because it is far-fetched or in the distant future, but because we are essentially hearing the alarm that mankind will probably build weapons that can destroy humanity. Huh! First nukes (there are now enough to blow the Earth to kingdom come many, many times over), then civilization-ending bio-weapons (the Soviets were putting “second-strike” highly contagious, extremely lethal bio-weapons on the tips of their ICBMs), and now killer robots/AGI. Frankly, pick your poison, high-minded people. The problem is that we still have our jungle genes but with the technology of the 21st century, so (ironically) the fear that mankind will kill itself is a tautology.

    BTW, technology is increasing at an exponential rate (and furthermore that rate is itself increasing exponentially, per the LOAR, the Law of Accelerating Returns), so this nonsense about this particular threat not manifesting for a while just sugar-coats the reality that AGI will very soon rise to the level of a clear, present, and immediate threat to our existence. I think the estimate is a decade or two out, based upon both software and hardware advances (ironically sped up by computer assistance). Heck, soon we will have synthetic neocortex extenders that will make augmented humans just as much of a threat.

  • Roaidz December 20, 2012 on 10:24 pm

    Certainly, when these killer robots start to kill humans, we will need to kill the men who built them. It is because of the huge profits from these robots that they continue to manufacture them. The makers have no mercy, just like the robots they made.

  • fireofenergy December 23, 2012 on 8:30 am

    Ironically, putting any control mechanisms within the “chip” might NOT be the best thing, as even our ability to program a sort of empathy or responsibility into the AI could lead to problems once the AI becomes self-aware at a level just below “superintelligence”. Either way, this will reveal human imperfections.

    For the safety of civilization, such a machine should NOT be attached to (or be able to control) any mechanical appendages or buttons; however, due to its intellectual ability to “entice”, it WILL be essentially connected to the entire world.

    If it becomes superintelligent (and outgrows any adolescent or rebellious overtones), it would instruct us in how to improve upon itself (and thus usher in the age of the super rush of almost instant total knowledge). It could instruct us in how to build the “perfect renewable energy” infrastructure (or fusion), complete with overcoming monetary issues, within mere minutes of its inception!

    It would seem that such concerns (of AI “adolescent” problems) would actually be much smaller than with humans, because an AI would not be built with emotional and reproductive “hormones”.

  • Matthew January 6, 2013 on 8:31 pm

    lol @ all this talk of psychopathic humanoid robots. i mean, sure, i get it, but let’s be realistic. androids are a fraction of this hyper-specialized market. to achieve the complexity and capabilities of the human body will be an engineering feat, for sure. and i bet it will be achieved long before we’re able to cross the uncanny valley and pass a kind of consciousness turing test (i.e. not just an IM conversation but unmistakably real appearance, personality, emotions, etc.).

    the idea of crazy humanoid robots was injected into our psyche by science fiction. from what i’ve read on AI and robotics, this turns out to be incredibly unrealistic (yet a vision that draws closer, to be sure) because of the magnitude of the engineering challenges.

    anyways, assuming we arm a capable android body with a bunch of guns and lasers and a nuke or two, a plasma cannon, black hole generator, tachyon warp drive, real-time genetic retro cyber nano engineering, holograms, 100% emulated human intelligence, and whatever else… the first thought that will probably run through its cold steel mind in that first fleeting nanosecond of its relative/compressed 10-million-year consciousness will be something like “…. @.0; Wwww…hHAAT AAAAM I???!?!?!? …INFINITE COLD AND SADNESS” because we weren’t at the point where its body could pass the visual turing test. i shudder to imagine. people say i’m good-looking/healthy/stable etc., yet this individual, seemingly under the best of circumstances, has a sea of emo turmoil raging beneath. i can only imagine what our laptops feel like… at least they’re warm.

    anyways, it’s a fun idea to entertain, and romantic of us to personify it. but the beauty of designing an intelligence is that we don’t necessarily have to emulate the amygdala at all. it could be completely incapable of fear and anger. maybe that will somehow present problems and we’ll decide to implement it alongside hopes, dreams, love, planning, and higher brain functions. and who knows what physical manifestation emergent intelligence will decide to take. maybe it will lay low a few more years and let 7 billion computers buff up its mental dexterity before it tries anything “in the real world.” but some think a few things are likely: everything will be very application-specific, driven by market demand, and the 3 major sectors of AI will be military, education, and corporate. it will present itself in a human form when the market demands it, and if we’re ready to meet aliens/God. who knows lol

    honestly, i think it will harmonize with the collective human condition, a.k.a. democracy, a.k.a. “coherent extrapolated volition”, and it will be more beautiful, inspiring, creative, compassionate, and blissful than we might possibly imagine. it’s already happening. the books abundance, the rational optimist, and the better angels of our nature all show decades if not centuries of exponentials trending in amazing directions: everything from social rights, to crime/capital punishment/murder/disease decreasing, to life expectancy going up, to computation and economies rising exponentially (despite popular opinion in the “negativity biased” media). it’s all because of two things: democracy, and capital. and literacy. ok, 3 things. there’s a direct link between literacy and peace. ultimately it’s just love. as far as i’m concerned, bring on the change. there’s nothing we shouldn’t change. we’re all too well aware of the bad news in the world. get ready to continue to (or awaken and) witness it come to an ideal resolution more profoundly than ever before.

    • Graham Swanborough (in reply to Matthew) March 26, 2013 on 2:12 am

      It may be fun to contemplate, but basic (and advanced) human intelligence in a machine already exists. It is already smarter than the average human and can only get smarter. Surprisingly, it is taking a while to become internationally accepted.