Will Advanced AI Be Our Final Invention?

It seems these days that no sooner do you get out the words “AI risk” than someone snaps back “Skynet.” You mention “computer takeover” and they blurt “HAL 9000.” “Intelligence explosion” is greeted with “The Forbin Project!” and “normal accidents” with “MechaGodzilla!”

In other words, you can’t seriously discuss problems that might arise with managing advanced AI because Hollywood got there first and over-exercised every conceivable plot line. There’s plenty of tech fear out there thanks to Hollywood, but there’s also the tendency to imagine we’re immune to AI risk because it’s already been in a movie, and so, ipso facto, it must be fantasy.

While it’s tempting to seek solace in this line of reasoning, the experts who are actually working on artificial intelligence have something else to say. Many point to a suite of looming problems clustered around the complexity of real-world software and the inherent uncontrollability of intelligence.

For example, Danny Hillis, co-founder of Thinking Machines Corporation, thinks we’ve entered a kind of evolutionary machine feedback loop, in which we use complex computers to design computers of even greater complexity, and the speed of that process is outpacing our ability to understand it.

Hillis writes, “We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoebas, and we can’t figure out what the hell this thing is that we’re creating.”

Even Ray Kurzweil, who was recently hired by Google to take search into cognitive realms, thinks machines will “evolve beyond humans’ ability to control or even understand them.”

Fully autonomous (circa 2019)

Complex? Opaque? Check! Then there’s the weaponization angle. While modestly funded academics at MIRI and FHI work on AI safety, big dollars are flowing in the opposite direction, towards human-killing AI. Autonomous kill drones and battlefield robots are on the drawing boards and in the lab, if not yet on the battlefield. Humans will be left out of the loop, by design.

Are they friendly? Safe? The Pentagon and stockholders won’t be a bit pleased if these killing machines turn out to be either.

So, in an environment where high-speed algorithms are battling it out in Wall Street flash crashes, and 56 nations are developing battlefield robots, is a book about bad outcomes from advanced AI premature, or right on time?

The recently released Our Final Invention: Artificial Intelligence and the End of the Human Era, by documentary filmmaker James Barrat, offers frank and sometimes raw arguments for why now, or actually yesterday, is the time to move the AI-problem conversation into the mainstream.

The problem isn’t AI, Barrat argues, it’s us.

Technological innovation always runs far ahead of stewardship. Look at nuclear fission. Splitting the atom started out as a quest for nearly free energy, so why did the world get its first real lesson in fission at Hiroshima? Similarly, Barrat argues, advanced AI is already being weaponized, and it is AI data-mining tools that have given the NSA such awesome powers of surveillance and creepiness.

In the next decades, when cognitive architectures far more advanced than IBM’s Watson achieve human-level intelligence, watch out: “Skynet!” No, seriously. Barrat makes a strong case for developmental problems on the road to AGI, and cataclysmic problems once it arrives.

Adorable AI Overlords FTW!

Imagine a half dozen companies and nations fielding computers that rival or surpass human intelligence, all at about the same time. Imagine what happens when those computers themselves become expert at programming smart computers. Imagine sharing the planet with AIs thousands or millions of times more intelligent than we are. And all the while the deepest pockets are weaponizing the best of this technology.

You can skip coffee this week — Our Final Invention will keep you wide-awake.

Barrat’s book is strongest when it’s connecting the dots that point towards a dystopian runaway-AI future and weakest when it seeks solutions. And maybe that’s the point.

The author doesn’t give Kurzweil the space and deference he normally gets as the singularity’s elder statesman and the pitchman for an everlasting tomorrow. Instead, Barrat faults Kurzweil and others like him for trumpeting AI’s promise while minimizing its peril, when they know that something fallible and dangerous lurks behind the curtain. Kinda like the Wizard of Oz.

Image Credit: Sarabbit/Flickr; US Air Force/Staff Sgt. Brian Ferguson/Wikimedia Commons; LaMenta3/Flickr

Louie Helm (http://rockstarresearch.com/)
Executive Editor, Rockstar Research Magazine