Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI

As obstinate and frustrating as we are sometimes, humans in general are pretty flexible when it comes to learning—especially compared to AI.

Our ability to adapt is deeply rooted within our brain’s chemical base code. Although modern AI and neurocomputation have largely focused on loosely recreating the brain’s electrical signals, chemicals are actually the prima donna of brain-wide neural transmission.

Chemical neurotransmitters not only allow most signals to jump from one neuron to the next, they also feed back and fine-tune a neuron’s electrical signals to ensure they’re functioning properly in the right contexts. This process, traditionally dubbed neuromodulation, has been front and center in neuroscience research for many decades. More recently, the idea has expanded to also include the process of directly changing electrical activity through electrode stimulation rather than chemicals.

Neural chemicals are the targets for most of our current medicinal drugs that re-jigger brain functions and states, such as anti-depressants or anxiolytics. Neuromodulation is also an immensely powerful way for the brain to flexibly adapt, which is why it’s perhaps surprising that the mechanism has rarely been explicitly incorporated into AI methods that mimic the brain.

This week, a team from the University of Liege in Belgium went old school. Using neuromodulation as inspiration, they designed a new deep learning model that explicitly adopts the mechanism to better learn adaptive behaviors. When challenged on a difficult navigational task, the team found that neuromodulation allowed the artificial neural net to better adjust to unexpected changes.

“For the first time, cognitive mechanisms identified in neuroscience are finding algorithmic applications in a multi-tasking context. This research opens perspectives in the exploitation in AI of neuromodulation, a key mechanism in the functioning of the human brain,” said study author Dr. Damien Ernst.

Modulated and Plastic

Neuromodulation often appears in the same breath as another jargon-y word, “neuroplasticity.” Simply put, they just mean that the brain has mechanisms to adapt; that is, neural networks are flexible or “plastic.”

Cellular neuromodulation is perhaps the grandfather of all learning theories in the brain. Famed Canadian psychologist and father of neural networks Dr. Donald Hebb popularized the theory in the mid-20th century; it is now often summed up as “neurons that fire together, wire together.” On a high level, Hebbian learning describes how individual neurons flexibly change their activity levels so that they better hook up into neural circuits, which underlie most of the brain’s computations.
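The Hebbian idea can be sketched in a few lines: when a presynaptic and a postsynaptic neuron are active at the same time, the connection between them strengthens in proportion to both activities. Below is a minimal, illustrative NumPy version (the learning rate and array sizes are arbitrary choices, not anything from the study):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """Hebbian rule: each weight grows in proportion to the product of
    pre- and post-synaptic activity ("fire together, wire together")."""
    return w + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
w = np.zeros((3, 4))        # 4 input neurons wired to 3 output neurons
pre = rng.random(4)         # presynaptic activity levels
post = rng.random(3)        # postsynaptic activity levels
w_new = hebbian_update(w, pre, post)
```

Because both activity vectors are non-negative here, every co-active pair of neurons ends up slightly more strongly connected than before, which is the essence of the rule.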

However, neuromodulation goes a step further. Here, neurochemicals such as dopamine don’t necessarily directly help wire up neural connections. Rather, they fine-tune how likely a neuron is to activate and link up with its neighbor. These so-called “neuromodulators” are similar to a temperature dial: depending on context, they either tell a neuron to calm down, so that it only activates when receiving a larger input, or hype it up, so that it jumps into action after a smaller stimulus.

“Cellular neuromodulation provides the ability to continuously tune neuron input/output behaviors to shape their response to external stimuli in different contexts,” the authors wrote. This level of adaptability especially comes into play when we try things that need continuous adjustments, such as how our feet strike uneven ground when running, or complex multitasking navigational tasks.

The Neuro-Modulated Network (NMN)

To be very clear, neuromodulation isn’t directly changing synaptic weights. (Ugh…what?)

Stay with me. You might know that a neural network, either biological or artificial, is a bunch of neurons connected to each other through different strengths. How readily one neuron changes a neighboring neuron’s activity—or how strongly they’re linked—is often called the “synaptic weight.”

Deep learning algorithms are made up of multiple layers of neurons linked to each other through adjustable weights. Traditionally, tweaking the strengths of these connections, or synaptic weights, is how a deep neural net learns (for those interested, the biological equivalent is dubbed “synaptic plasticity”).

However, neuromodulation doesn’t act on weights directly. Rather, it alters how readily a neuron or network can change its connections in the first place; in other words, it tunes their flexibility.

Neuromodulation is a meta-level of control, so it’s perhaps not surprising that the new algorithm is actually composed of two separate neural networks.

The first is a traditional deep neural net, dubbed the “main network.” It processes input patterns and uses a custom method of activation—how likely a neuron in this network is to spark to life depends on the second network, or the neuromodulatory network. Here, the neurons don’t process input from the environment. Rather, they deal with feedback and context to dynamically control the properties of the main network.

Especially important, said the authors, is that the modulatory network scales in size with the number of neurons in the main one, rather than the number of their connections. This, they said, is what makes the NMN different: the setup allows the approach to extend “more easily to very large networks.”
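Putting the two-network idea together: the main network processes the environment, while the modulatory network turns feedback or context into one dial setting per main-network neuron. A minimal NumPy sketch of that arrangement follows; all dimensions, names, and the exact modulation formula are assumptions chosen for illustration, not the paper’s implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes for illustration only.
n_in, n_hidden, n_ctx = 8, 16, 4

# Main network: processes environmental input (one layer, fixed weights here).
W_main = rng.standard_normal((n_hidden, n_in)) * 0.1

# Neuromodulatory network: maps context/feedback to one (scale, shift) pair
# per main-network neuron. Its output size grows with the number of neurons
# (2 * n_hidden), not with the number of connections (n_hidden * n_in).
W_mod = rng.standard_normal((2 * n_hidden, n_ctx)) * 0.1

def nmn_forward(x, context):
    """One forward pass: context rescales and shifts each neuron's
    activation instead of rewriting the main network's weights."""
    mod = W_mod @ context
    scale, shift = mod[:n_hidden], mod[n_hidden:]
    z = W_main @ x
    return np.tanh((1.0 + scale) * z + shift)

h = nmn_forward(rng.standard_normal(n_in), rng.standard_normal(n_ctx))
```

Feeding the same input with a different context vector yields a different response from the very same weights, which is the behavioral flexibility the authors are after.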

Modulation Win

To gauge the adaptability of their new AI, the team pitted the NMN against traditional deep learning algorithms in a scenario using reinforcement learning—that is, learning through wins or mistakes.

In two navigational tasks, the AI had to learn to move towards several targets through trial and error alone. It’s somewhat analogous to playing hide-and-seek while blindfolded in a completely new venue. The first task is relatively simple: you’re only moving towards a single goal, and you can take off your blindfold to check where you are after every step. The second is more difficult in that you have to reach one of two marks. The closer you get to the actual goal, the higher the reward—candy in real life, its digital analog for the AI. If you stumble on the other, you get punished—the AI equivalent of a slap on the hand.

Remarkably, NMNs learned both faster and better than traditional reinforcement learning deep neural nets. Regardless of how they started, NMNs were more likely to figure out the optimal route towards their target in much less time.

Over the course of learning, NMNs not only used their neuromodulatory network to change their main one, they also adapted the modulatory network itself—talk about meta! It means that as the AI learned, it didn’t just flexibly adapt its learning; it also changed how it influenced its own behavior.

In this way, the neuromodulatory network is a bit like a library of self-help books—you don’t just solve a particular problem, you also learn how to solve the problem. The more information the AI got, the faster and better it fine-tuned its own strategy to optimize learning, even when feedback wasn’t perfect. The NMN also didn’t like to give up: even when already performing well, the AI kept adapting to further improve itself.

“Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems,” the authors said.

The study is just the latest in a push to incorporate more biological learning mechanisms into deep learning. We’re at the beginning: neuroscientists, for example, are increasingly recognizing the role of non-neuron brain cells in modulating learning, memory, and forgetting. Although computational neuroscientists have begun incorporating these findings into models of biological brains, so far AI researchers have largely brushed them aside.

It’s difficult to know which brain mechanisms are necessary substrates for intelligence and which are evolutionary leftovers, but one thing is clear: neuroscience is increasingly providing AI with ideas outside its usual box.

Image Credit: Image by Gerd Altmann from Pixabay

Shelly Fan
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs an award-winning blog. Her first book, "Will AI Replace Us?" (Thames & Hudson), was published in 2019.