Artificial Intelligence Space Invaders

Remember Space Invaders? The arcade game and later Atari hit pitted a lone pixelated laser cannon against a swarm of equally pixelated descending aliens. Maybe you enjoyed the game occasionally, or maybe you stayed up all night seeking mastery.

If it was the latter, you now share that experience with a machine.

In a recent interview, Demis Hassabis—founder of artificial intelligence firm DeepMind, acquired last year by Google for over $500 million—talked AI and showed video of one of his group's deep learning algorithms killing it at Space Invaders.

Hassabis says artificial intelligence is the science of making intelligent machines. There are two ways to do that: pre-program special solutions that the machine automatically executes, or build a machine with no particular skills but the general ability to learn from its experiences and incoming environmental information.

Most intelligent programs to date are of the former category. The programs of the future, programs like those being developed by DeepMind and others, will be able to learn on the fly and improve their skills without further human intervention.

Early demonstrations of these learning algorithms are simple. DeepMind is famous for its video game playing programs, for example. And they are cool. Hassabis says the software in the video went from terrible to superhuman in about eight hours of play.
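To give a flavor of how a program can improve purely through play, here is a minimal sketch of tabular Q-learning, a classic reinforcement learning method, on a toy one-dimensional "game." This is an illustrative assumption, not DeepMind's system: their software used deep Q-networks trained on raw screen pixels, which is far more sophisticated. The environment, reward values, and hyperparameters below are all invented for the example.

```python
import random

# Toy stand-in for a game: the agent sits on a 1-D track of 5 cells and
# must learn, from reward alone, to walk right toward the goal cell.
N_STATES = 5          # positions 0..4; reward waits at position 4
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q[state][action]: estimated future reward, all zero before any play
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # epsilon-greedy: mostly exploit the best-known move, sometimes explore
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # (immediate reward + discounted best future value)
            q[state][a] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# After training, the learned policy prefers "right" in every non-terminal state.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)  # → ['right', 'right', 'right', 'right']
```

The agent starts with no knowledge of the game, exactly as described above; skill emerges only from repeated trial, error, and reward.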

But Google didn’t pay half a billion dollars for a 70s-era AI gamer.

Perhaps the most obvious reason they acquired DeepMind is the technology’s potential to improve search of text or images. And even that is likely too narrow in the longer run.

Google’s current fleet of self-driving cars, for example, is pretty amazing. But the cars don’t learn. They aren’t flexible in a world of endless variety, relying instead on programmers to account for as many situations as they can. A Sisyphean task.

That isn’t to say we can’t get a high degree of automation without deep learning. But a fully self-driving car will likely require more flexibility on the fly than is possible now.

Or consider Google’s acquisition of eight robotics firms last December. Robots will remain glorified Roombas until they can learn and interact with their surroundings. Perhaps Google will pair deep learning with future robots in the factory or home.

Hassabis, at least, thinks we’ll see personal robots and self-driving cars in the next five, ten, or fifteen years. But he’s even more excited when he imagines what happens when powerful artificial intelligence programs start tackling the biggest challenges.

“Macroeconomics, climate change, disease, energy—the science of these comes down to crunching masses of information. It’s too much for even very smart human scientists to fully understand. We’re probably missing things. I think we need aids like artificial intelligence technology to…make better use of this data for the good of society.”

Image Credit: Shutterstock.com

Jason is managing editor of Singularity Hub. He cut his teeth doing research and writing about finance and economics before moving on to science, technology, and the future. He is curious about pretty much everything, and sad he'll only ever know a tiny fraction of it all.