Google Chases General Intelligence With New AI That Has a Memory

For a mind to be capable of tackling anything, it has to have a memory.

Humans are exceptionally good at transferring old skills to new problems. Machines, despite all their recent wins against humans, aren’t. This is partly due to how they’re trained: artificial neural networks like those developed at Google’s DeepMind learn to master a single task and call it quits. To learn a new task, a network has to reset, wiping out its previous memories and starting again from scratch.

This phenomenon, quite aptly dubbed “catastrophic forgetting,” condemns our AIs to be one-trick ponies.

Now, taking inspiration from the hippocampus, our brain’s memory storage system, researchers at DeepMind and Imperial College London developed an algorithm that allows a program to learn one task after another, using the knowledge it gained along the way.

When challenged with a slew of Atari games, the neural network flexibly adapted its strategy and mastered each game, while conventional, memory-less algorithms faltered.

“The ability to learn tasks in succession without forgetting is a core component of biological and artificial intelligence,” writes the team in their paper, which was published in the journal Proceedings of the National Academy of Sciences.

“If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” says study lead author Dr. James Kirkpatrick, adding that the study overcame a “significant shortcoming” in artificial neural networks and AI.

Making Memories

This isn’t the first time DeepMind has tried to give their AIs some memory power.

Last year, the team set its sights on a kind of external memory module, somewhat similar to human working memory—the ability to keep things in mind while using them to reason or solve problems.

Combining a neural network with an external, RAM-like memory module, the researchers showed that their new hybrid system managed to perform multi-step reasoning, a type of task that has long stumped conventional AI systems.

But it had a flaw: the hybrid, although powerful, required constant communication between the two components—not an elegant solution, and a total energy sink.

In this new study, DeepMind backed away from computer storage ideas, instead zooming deep into the human memory machine—the hippocampus—for inspiration.

And for good reason. Artificial neural networks, true to their name, are loosely modeled after their biological counterparts. Made up of layers of interconnected neurons, the algorithm takes in millions of examples and learns by adjusting the connections between those neurons—somewhat like fine-tuning a guitar.
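In code, that fine-tuning boils down to nudging each connection weight slightly in whatever direction shrinks the error on the training examples. The snippet below is a minimal, generic gradient-descent sketch in plain NumPy; the bare-bones linear model and every name in it are illustrative assumptions, not any particular DeepMind system.

```python
import numpy as np

def train_step(weights, inputs, targets, lr=0.01):
    """One learning nudge: predict, measure the error, adjust the connections."""
    predictions = inputs @ weights               # a bare-bones linear "network"
    errors = predictions - targets               # how wrong each prediction was
    gradient = inputs.T @ errors / len(targets)  # average direction of increasing error
    return weights - lr * gradient               # step the weights the opposite way
```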

A very similar process occurs in the hippocampus. What’s different is how the connections change when learning a new task. In a machine, the weights are reset, and anything learned is forgotten.

In a human, memories undergo a kind of selection: if they help with subsequent learning, they become protected; otherwise, they’re erased. In this way, not only are memories stored within the neuronal connections themselves (without needing an external module), they also stick around if they’re proven useful.

This theory, called “synaptic consolidation,” is considered a fundamental aspect of learning and memory in the brain. So of course, DeepMind borrowed the idea and ran with it.

Crafting an Algorithm

The new algorithm mimics synaptic consolidation in a simple way.

After learning a game, the algorithm pauses and figures out how helpful each connection was to the task. It then keeps the most useful parts and makes those connections harder to change as it learns a new skill.
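In the published paper, that importance score comes from the diagonal of the Fisher information matrix, which can be approximated by averaging squared gradients of the task loss over the training data. Here is a minimal sketch of that approximation in PyTorch; the function and variable names are assumptions for illustration, not DeepMind’s actual code.

```python
import torch

def estimate_importance(model, data_loader, loss_fn):
    """Rough per-weight importance for the task just learned: the average
    squared gradient of the loss (a diagonal Fisher approximation)."""
    importance = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                # Weights whose gradients are consistently large mattered more
                # for this task, so they earn a higher importance score.
                importance[name] += p.grad.detach() ** 2
    return {name: score / len(data_loader) for name, score in importance.items()}
```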

“[This] way there is room to learn the new task but the changes we’ve applied do not override what we’ve learned before,” says Kirkpatrick.

Think of it like this: visualize every connection as a spring with a different stiffness. The more important a connection is for successfully tackling a task, the stiffer it becomes, and thus the harder it is to change.

“For this reason, we called our algorithm Elastic Weight Consolidation (EWC),” the authors explained in a blog post introducing the algorithm.
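In loss-function terms, those springs are a quadratic penalty added to the new task’s training objective: pulling an important weight away from its old value costs far more than moving an unimportant one. Continuing the illustrative names from the sketch above, the penalty might look like this, with lam standing in for an assumed stiffness hyperparameter:

```python
def ewc_penalty(model, importance, old_params, lam=1000.0):
    """Elastic 'spring' term: each weight is pulled back toward its old value
    with a stiffness proportional to its estimated importance."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty

# While training on the next game, the total loss would then be:
#   loss = new_task_loss + ewc_penalty(model, importance, old_params)
```

Stiff springs make their weights nearly immovable, while slack ones leave room to learn the new game.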

Game On

To test their new algorithm, the team turned to DeepMind’s favorite AI training ground: Atari games.

Previously, the company unveiled a neural network-based AI called Deep Q-Network (DQN) that could teach itself to play Atari games as well as any human player. From Space Invaders to Pong, the AI mastered our nostalgic favorites, but only one game at a time.

The team then pitted their memory-enhanced DQN against its classical version, putting both agents through a random selection of ten Atari games. After 20 million plays of each game, the team found that their new AI mastered seven out of the ten with performance as good as any human player’s.

In stark contrast, without the memory boost, the classical algorithm could barely play a single game by the end of training, because it forgot everything it had learned each time it moved on to a new game.

“Today, computer programs cannot learn from data adaptively and in real time. We have shown that catastrophic forgetting is not an insurmountable challenge for neural networks,” the authors say.

Machine Brain

That’s not to say EWC is perfect.

One issue is the possibility of a “blackout catastrophe”: since the connections in EWC can only become less plastic over time, the network eventually saturates. This locks it into a single unchangeable state in which it can no longer retrieve old memories or store new information.

That said, “We did not observe these limitations under the more realistic conditions for which EWC was designed—likely because the network was operating well under capacity in these regimes,” explained the authors.

Performance-wise, the algorithm was a sort of “jack-of-all-trades”: decent at plenty, master of none. Although the network retained knowledge from each game, its performance on any given game was worse than that of a traditional neural network dedicated to that one game.

One possible stumbling block, the authors explain, is that the algorithm may not have accurately judged the importance of certain connections in each game, something that needs further optimization.

“We have demonstrated that [EWC] can learn tasks sequentially, but we haven’t shown that it learns them better because it learns them sequentially,” says Kirkpatrick. “There’s still room for improvement.”

But the team hopes that their work will nudge AI towards the next big thing: general-purpose intelligence, in which AIs achieve the kind of adaptive learning and reasoning that come to humans naturally.

What’s more, the work could also feed back into neurobiological theories of learning.

“Synaptic consolidation was previously only proven in very simple examples. Here we showed that the same theories can be applied in a more realistic and complex context—it really shows that the theory could be key to retaining our memories and know-how,” explained the authors. After all, to emulate is to understand.

Over the past decade, neuroscience and machine learning have become increasingly intertwined. And no doubt our mushy thinking machines have more to offer their silicon brethren, and vice-versa.

“We hope that this research represents a step towards programs that can learn in a more flexible and efficient way,” the authors say.

Image Credit: Shutterstock

Shelly Fan
https://neurofantastic.com/
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.