As the Powerful Argue AI Ethics, Might Superintelligence Arise on the Fringes?

Last year, Elon Musk and Stephen Hawking said they were concerned about artificial intelligence. While undeniably brilliant, neither is an AI researcher. Then this week Bill Gates leapt into the fray, also voicing concern—even as a chief of research at Microsoft said advanced AI doesn’t worry him. It’s a hot topic. And hotly debated. Why?

In part, it’s because tech firms are pouring big resources into research. Google, Facebook, Microsoft, and others are making rapid advances in machine learning—a technique where programs learn by interacting with large sets of data.

But it’s here that a critical distinction should be made. Machine learning powers what’s called ‘narrow artificial intelligence’. Machine learning programs that can identify discrete features in images, for example, are being used to analyze images of tissue for the presence of cancer. Amazon and Netflix recommendation systems are a form of narrow AI. Google search learns from its interactions with users to improve search results.

The debate Musk, Hawking, and Gates are wading into is about the future of AI—specifically, the point at which general AI emerges (just how far off that is, is itself controversial). General artificial intelligence would match and then, perhaps very quickly, exceed human intelligence. It is, in fact, an old and oft-recurring debate with fresh legs.

In his book Superintelligence, released last year, Nick Bostrom argues that there are good reasons to believe artificial superintelligence could be very alien, very powerful, and as it seeks to achieve its goals, could wipe human beings out.

Bostrom goes on to say that AI, ironically, may offer the best safeguard.

We aren’t smart enough to train an AI to pursue safe goals—but it could train itself. “The idea is to leverage the superintelligence’s intelligence, to rely on its estimates of what we would have instructed it to do.”

Though most experts agree artificial intelligence research should be pursued carefully—and in fact, many also believe general AI may emerge this century—Bostrom’s argument isn’t universally accepted. And we won’t resolve the debate here. But it’s the weekend, so maybe a sci-fi short film on the topic would be more entertaining.

Director Henry Dunham’s “The Awareness” summons up dark visions of Skynet and Terminator. And it notes that while the powerful publicly debate ethics and safety, they can’t prevent or control advances made on the fringes. That’s the beauty and terror of democratized digital technology. The film is set in a dark, grimy warehouse that doubles as the offices of a struggling tech startup, and its lead programmer sums it up: “I created the future on a $30 table.”


Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.