Why Neuroscience Is the Key to Innovation in AI

The future of AI lies in neuroscience.

So says Google DeepMind co-founder Demis Hassabis in a review paper published last week in the prestigious journal Neuron.

Hassabis is no stranger to either field. Armed with a PhD in neuroscience, the computer maverick launched London-based DeepMind to recreate intelligence in silicon. In 2014, Google snapped up the company for over $500 million.

It’s money well spent. Last year, DeepMind’s AlphaGo wiped the floor with its human competitors in a series of Go challenges around the globe. Together with OpenAI, the non-profit AI research institution backed by Elon Musk, the company is steadily pushing towards machines with higher reasoning capabilities than ever before.

The company’s secret sauce? Neuroscience.

Baked into every DeepMind AI are concepts and ideas first discovered in our own brains. Deep learning and reinforcement learning—two pillars of contemporary AI—both loosely translate biological neuronal communication into formal mathematics.

The results, as exemplified by AlphaGo, are dramatic. But Hassabis argues that it’s not enough.

As powerful as today’s AIs are, each one is limited in the scope of what it can do. The goal is to build general AI with the ability to think, reason and learn flexibly and rapidly; AIs that can intuit about the real world and imagine better ones.

To get there, says Hassabis, we need to scrutinize the inner workings of the human mind more closely—the only proof we have that such an intelligent system is even possible.

Identifying a common language between the two fields will create a “virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances,” Hassabis and colleagues write.

The Problem With Intelligence

The bar is high for AI researchers striving to bust through the limits of contemporary AI.

Depending on their specific tasks, machine learning algorithms are set up with specific mathematical structures. Through millions of examples, artificial neural networks learn to fine-tune the strength of their connections until they reach a state that lets them complete the task with high accuracy—be it identifying faces or translating languages.

Because each algorithm is highly tailored to the task at hand, relearning a new task often erases the established connections. This leads to “catastrophic forgetting”: as the AI learns the new task, it completely overwrites the previous one.
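
To see the failure concretely, here’s a minimal toy sketch in Python (an illustration for this article, not code from the paper or from DeepMind): a single set of logistic-regression weights is trained on one task, retrained on a second, and promptly loses the first.

```python
# Toy demonstration of catastrophic forgetting: one weight vector is
# trained on task A, then on task B, and overwrites what it knew about A.
# (Illustrative only; deep networks fail the same way at larger scale.)
import numpy as np

def make_task(seed, n=500, dim=10):
    """A random linear classification task: label = sign of X @ w_true."""
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, dim))
    y = (X @ r.normal(size=dim) > 0).astype(float)
    return X, y

def train(w, X, y, epochs=200, lr=0.1):
    """Plain logistic-regression gradient descent on the shared weights w."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

X_a, y_a = make_task(seed=1)
X_b, y_b = make_task(seed=2)

w = train(np.zeros(10), X_a, y_a)
print("after task A:", accuracy(w, X_a, y_a))  # typically ~0.95 or higher

w = train(w, X_b, y_b)  # same weights, new task: task A gets overwritten
print("after task B:", accuracy(w, X_a, y_a), accuracy(w, X_b, y_b))
```

Retraining drags the shared weights toward task B’s decision boundary, so accuracy on task A falls back toward chance. That collapse is exactly the overwriting described above.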

The dilemma of continuous learning is just one challenge. Others are even less defined but arguably more crucial for building the flexible, inventive minds we cherish.

Embodied cognition is a big one. As Hassabis explains, it’s the ability to build knowledge from interacting with the world through sensory and motor experiences, and to create abstract thought from there.

It’s the sort of good old-fashioned common sense that we humans have, an intuition about the world that’s hard to describe but extremely useful for the daily problems we face.

Even harder to program are traits like imagination. That’s where AIs limited to one specific task really fail, says Hassabis. Imagination and innovation rely on models we’ve already built about our world, and on extrapolating new scenarios from them. They’re hugely powerful planning tools—but research into these capabilities for AI is still in its infancy.

Inspirations from the Brain

It’s actually not widely appreciated among AI researchers that many of today’s pivotal machine learning algorithms come from research into animal learning, says Hassabis.

An example: recent findings in neuroscience show that the hippocampus—a seahorse-shaped structure that acts as a hub for encoding memory—replays past experiences in fast-forward during rest and sleep.

This offline replay allows the brain to “learn anew from successes or failures that occurred in the past,” says Hassabis.

AI researchers seized on the idea and implemented a rudimentary version in an algorithm that combines deep learning and reinforcement learning. The result is powerful neural networks that learn from experience. They compare current situations with previous events stored in memory, and take actions that previously led to reward.
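
In engineering terms, that replay usually takes the form of a buffer of stored transitions that the agent revisits during training. Below is a minimal sketch of the idea; the uniform random sampling and fixed capacity are simplifying assumptions, not details taken from the paper.

```python
# Minimal experience-replay buffer: store past transitions and "replay"
# random batches of them during learning, echoing hippocampal replay.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.memory = deque(maxlen=capacity)  # oldest memories fall out first

    def add(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Random replay breaks the correlation between consecutive
        # experiences and lets the agent learn anew from past successes
        # and failures, long after they happened.
        return random.sample(list(self.memory), min(batch_size, len(self.memory)))

# Typical use: after every environment step,
#   buffer.add(s, a, r, s_next, done)
# then update the network on buffer.sample() instead of only the latest step.
```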

These agents show “striking gains in performance” over traditional deep learning algorithms. They’re also great at learning on the fly: rather than needing millions of examples, they just need a handful.
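
That fast, few-example learning is often framed as an episodic memory: rather than slowly adjusting millions of weights, the agent looks up similar past situations and reuses their outcomes. Here is a hedged sketch of such a lookup; the nearest-neighbor averaging is an illustrative choice, not the published architecture.

```python
# Episodic-style value lookup: remember (situation, outcome) pairs and
# judge a new situation by its k most similar stored memories.
import numpy as np

class EpisodicMemory:
    def __init__(self):
        self.keys = []     # situation embeddings
        self.returns = []  # reward eventually obtained from each situation

    def write(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.returns.append(float(value))

    def estimate(self, query, k=3):
        """Average the outcomes of the k nearest stored situations."""
        if not self.keys:
            return 0.0
        dists = [np.linalg.norm(query - key) for key in self.keys]
        nearest = np.argsort(dists)[:k]
        return float(np.mean([self.returns[i] for i in nearest]))

memory = EpisodicMemory()
memory.write([0.0, 1.0], value=1.0)   # a past situation that paid off
memory.write([5.0, 5.0], value=-1.0)  # one that didn't
print(memory.estimate(np.array([0.1, 0.9]), k=1))  # -> 1.0
```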

Similarly, neuroscience has been a fruitful source of inspiration for other advances in AI, including algorithms equipped with a “mental sketchpad” that lets them plan their way through convoluted problems more efficiently.

A Booming Future

But the best is yet to come.

The advent of brain imaging tools and genetic bioengineering is offering an unprecedented look at how biological neural networks organize and combine to tackle problems.

As neuroscientists work to crack the “neural code”—the basic computations that support brain function—they are handing AI researchers an expanding toolbox to tinker with.

One area where AIs can benefit from the brain is our knowledge of core concepts that relate to the physical world—spaces, numbers, objects, and so on. Like mental Legos, the concepts form the basic building blocks from which we can construct mental models that guide inferences and predictions about the world.

We’ve already begun exploring ideas to address the challenge, says Hassabis. Studies with humans show that we decompose sensory information into individual objects and relations. When implemented in code, this strategy has already led to human-level performance on challenging reasoning tasks.
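
One way to express “objects and relations” in code is to reason over every pair of object features instead of one monolithic input, loosely in the spirit of that line of work. In the sketch below, the small random matrices G and F are stand-ins for learned networks and are purely illustrative.

```python
# Relational sketch: decompose a scene into object vectors, score every
# pair with a small "relation" function g, then aggregate with a readout f.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 16))  # stand-in for a learned pairwise network g
F = rng.normal(size=(16, 1))  # stand-in for a learned readout network f

def relate(objects):
    """Aggregate g(a, b) over all ordered pairs of objects, then apply f."""
    pooled = np.zeros(16)
    for a in objects:
        for b in objects:
            pooled += np.maximum(0.0, np.concatenate([a, b]) @ G)  # ReLU(g)
    return (pooled @ F).item()  # a single relational "answer" score

scene = [rng.normal(size=4) for _ in range(3)]  # e.g. 3 detected objects
print(relate(scene))
```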

Then there’s transfer learning, the ability that could take AIs from one-trick ponies to flexible thinkers capable of tackling any problem. One method, called progressive networks, captures some of the basic principles of transfer learning and was successfully used to train a real robot arm based on simulations.
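
In rough outline, a progressive network freezes the column of weights trained on the first task and gives a fresh column for the new task lateral access to the frozen features. The sketch below is a simplification; the single hidden layer and all the shapes are assumptions for illustration.

```python
# Progressive-network sketch: column 1 (task A) is frozen; column 2
# (task B) gets its own weights plus lateral connections into column 1's
# hidden features, so old knowledge is reused instead of overwritten.
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(0.0, v)

W1_in = rng.normal(size=(10, 32))   # column 1: trained on task A, then frozen
W2_in = rng.normal(size=(10, 32))   # column 2: fresh weights for task B
U_lat = rng.normal(size=(32, 32))   # lateral adapter: task-A features -> task B
W2_out = rng.normal(size=(32, 4))

def forward_task_b(x):
    h1 = relu(x @ W1_in)               # frozen task-A features (never updated)
    h2 = relu(x @ W2_in + h1 @ U_lat)  # new column sees its input AND h1
    return h2 @ W2_out                 # only column 2's weights are trained

print(forward_task_b(rng.normal(size=10)))  # e.g. four action scores for task B
```

Because the first column never changes, the new task cannot overwrite it, sidestepping the catastrophic forgetting described earlier.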

Intriguingly, these networks resemble a computational model of how the brain learns sequential tasks, says Hassabis.

The problem is that neuroscience hasn’t yet figured out how humans and animals achieve high-level knowledge transfer. It’s possible that the brain extracts abstract knowledge structures and the relationships between them, but so far there’s no direct evidence to support this kind of coding.

A Virtuous Circle

Without doubt AIs have a lot to learn from the human brain. But the benefits are reciprocal. Modern neuroscience, for all its powerful imaging tools and optogenetics, has only just begun unraveling how neural networks support higher intelligence.

“Neuroscientists often have only quite vague notions of the mechanisms that underlie the concepts they study,” says Hassabis. Because AI research relies on stringent mathematics, the field could offer a way to clarify those vague concepts into testable hypotheses.

Of course, it’s unlikely that AI and the brain will always work the same way. The two fields tackle intelligence from dramatically different angles: neuroscience asks how the brain works and what biological principles underlie it; AI is more utilitarian, free from the constraints of evolution.

But we can think of AI as applied (rather than theoretical) computational neuroscience, says Hassabis, and there’s a lot to look forward to.

Distilling intelligence into algorithms and comparing it to the human brain “may yield insights into some of the deepest and most enduring mysteries of the mind,” he writes.

Think creativity, dreams, imagination, and—perhaps one day—even consciousness.

Shelly Fan (https://neurofantastic.com/)
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.