Google’s AI-Building AI Is a Step Toward Self-Improving AI

Reaching the technological singularity is almost certainly going to involve AI that is able to improve itself. Google may have now taken a small step along this path by creating AI that can build AI.

Speaking at the company’s annual I/O developer conference, CEO Sundar Pichai announced a project called AutoML that can automate one of the hardest parts of designing deep learning software: choosing the right architecture for a neural network.

The Google researchers created a machine learning system that used reinforcement learning—the trial-and-error approach at the heart of many of Google’s most notable AI exploits—to figure out the best architectures for solving language and image recognition tasks.
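The core idea is to treat architecture design itself as a reinforcement learning problem: a “controller” samples candidate architectures, each candidate is trained and scored, and the score is fed back as a reward so the controller learns to propose better designs. The sketch below is a toy illustration of that loop only; the search space, the REINFORCE-style controller, and the proxy_reward() function (a stand-in for actually training and validating each candidate network) are all assumptions, not Google’s implementation.

```python
# A minimal sketch of reinforcement-learning-based architecture search.
# Nothing here comes from Google's code: the search space, the controller,
# and proxy_reward() (a stand-in for "train the network and measure
# validation accuracy") are illustrative assumptions.
import numpy as np

# Toy search space: the controller picks one option per architectural decision.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "filter_size": [3, 5, 7],
    "activation": ["relu", "tanh", "elu"],
}

rng = np.random.default_rng(0)
# One softmax policy (a vector of logits) per decision.
logits = {k: np.zeros(len(v)) for k, v in SEARCH_SPACE.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    """Sample one choice per decision from the controller's current policy."""
    arch = {}
    for key, options in SEARCH_SPACE.items():
        probs = softmax(logits[key])
        arch[key] = rng.choice(len(options), p=probs)
    return arch

def proxy_reward(arch):
    """Stand-in for training the candidate network and returning its
    validation accuracy. In a real system this is the expensive step that
    tied up hundreds of GPUs; here it is just an arbitrary scoring rule."""
    return ([0.2, 0.5, 0.3][arch["num_layers"]]
            + [0.4, 0.3, 0.1][arch["filter_size"]]
            + [0.3, 0.1, 0.2][arch["activation"]])

LEARNING_RATE = 0.5
baseline = 0.0  # running average reward, reduces gradient variance

for step in range(200):
    arch = sample_architecture()
    reward = proxy_reward(arch)
    baseline = 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline
    # REINFORCE update: push up the probability of the sampled choices
    # in proportion to how much better than average they performed.
    for key in SEARCH_SPACE:
        probs = softmax(logits[key])
        grad = -probs
        grad[arch[key]] += 1.0
        logits[key] += LEARNING_RATE * advantage * grad

best = {k: SEARCH_SPACE[k][int(np.argmax(logits[k]))] for k in SEARCH_SPACE}
print("Controller's preferred architecture:", best)
```

In the real system, each reward requires training a full network to convergence, which is why the search consumed hundreds of GPUs for weeks; the loop itself is cheap.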

Not only did the results rival or beat the performance of the best human-designed architectures, but the system made some unconventional choices that researchers had previously considered inappropriate for those kinds of tasks.

The approach is still a long way from being practical, the researchers told MIT Technology Review, as it tied up 800 powerful graphics processors for weeks. But Google is betting that automating the process of building machine learning systems could help get around the shortage of human machine learning and data science talent that is slowing the technology’s adoption.

It’s not the only one. Facebook engineers have built what they like to call an “automated machine learning engineer,” according to Wired. Also called AutoML, it can choose the algorithms and parameters most likely to solve the problem at hand.
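In practice, that kind of “automated machine learning engineer” boils down to searching over candidate algorithms and their hyperparameters and keeping whichever scores best on held-out data. A minimal sketch of the idea, built from off-the-shelf scikit-learn components rather than anything from Facebook’s system (the candidate models, parameter grids, and dataset are all illustrative choices), might look like this:

```python
# A minimal sketch of automated model and hyperparameter selection.
# The candidates, grids, and dataset are illustrative, not Facebook's system.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Each candidate pairs an algorithm with a small hyperparameter grid.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200],
                                              "max_depth": [None, 5]}),
    (SVC(), {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    # Cross-validated grid search tunes each algorithm's parameters
    # with no human in the loop.
    search = GridSearchCV(estimator, grid, cv=5)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Selected model: {best_model} (cv accuracy {best_score:.3f})")
```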

Last summer, the AutoML challenge saw teams go head-to-head to build machine learning “black boxes” that can select models and tune parameters without any human intervention. Even game designers are in on the act—the team behind the hit game Space Engineers has used some of its profits to set up a team of experts to design AI able to optimize its own hardware and software.

While this kind of automation could make it easier for non-experts to design and deploy AI systems, it also seems to be laying the foundation for machines that can take control of their own destiny.

The concept of “recursive self-improvement” is at the heart of most theories on how we could rapidly go from moderately smart machines to AI superintelligence. The idea is that as AI gets more powerful, it can start modifying itself to boost its capabilities. As it makes itself smarter, it gets better at making itself smarter, which quickly leads to exponential growth in its intelligence.

Generally, the so-called “seed AI” is envisaged as an artificial general intelligence (AGI): a machine able to carry out any intellectual task a human could, rather than a specialist in one narrow area, as most of today’s algorithms are.

The systems being worked on today are clearly a long way from AGI, and they are directed at building and improving other machine learning systems rather than themselves. Outside of machine learning, self-modifying code has been around for a while, but it would likely be far more complex to deploy this technique to edit neural networks.
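For a sense of what self-modifying code means in the simplest case, here is a toy Python illustration, not drawn from any of the systems mentioned above: a program treats a piece of its own logic as text, rewrites it, re-executes the variants, and keeps the one that scores best.

```python
# Toy self-modifying code: the program stores part of its own logic as source
# text, generates rewritten variants, executes them, and keeps the best one.
# Purely illustrative; real self-modifying systems are far more involved.
SOURCE = "def score(x):\n    return x * 1  # the multiplier is the part we rewrite\n"

best_source, best_output = SOURCE, None
for multiplier in range(1, 4):
    candidate = SOURCE.replace("x * 1", f"x * {multiplier}")
    namespace = {}
    exec(candidate, namespace)        # compile and load the rewritten code
    output = namespace["score"](10)   # run the modified behaviour
    if best_output is None or output > best_output:
        best_source, best_output = candidate, output

print(best_output)   # 30: the program kept the rewrite that scored highest
print(best_source)   # the winning version of its own code
```

Doing the same to a trained neural network would mean editing millions of learned parameters and the architecture that connects them, which is a far harder search problem than rewriting a single constant.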

But creating algorithms able to work on machine learning code is clearly a first step towards the kind of self-improving AI envisaged by futurists.

Other recent developments could also push in this direction. Many AI researchers are trying to encode curiosity and creativity into machine learning systems, both traits likely to be necessary for a machine to redesign itself in performance-boosting ways. Others are working on allowing robots to share the lessons they’ve learned, effectively turning them into a kind of hive mind.

Doubtless, it will be a long time before any of these capabilities reach the stage where they can be usefully employed to create a self-improving AI. But we can already see the technological foundations being laid.

Image Credit: Pond5

Edd Gent
http://www.eddgent.com/
Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.